All options in Duplicati are chosen to fit a wide range of users, such that as few as possible of the users need to change settings. Some of these options are related to the sizes of various elements. Choosing these options optimally is a balance between different usage scenarios and has different tradeoffs. This document explains what these tradeoffs are and how to choose the settings that fit a specific backup best.

The block size

As Duplicati makes backups with blocks, aka “file chunks”, one option is to choose what size a “chunk” should be. The chunk size is set via the advanced option --blocksize and is set to 100 KB by default. Due to the way blocks are referenced (by hashes), it is not possible to change the chunk size after the first backup has been made. Duplicati will abort the operation with an error if you attempt to change the chunk size on an existing backup.

If you choose a larger chunk size, that will obviously generate “fewer but larger blocks”, provided your files are larger than the chunk size. If a file is smaller than the chunk size, or its size is not evenly divisible by the block size, it will generate a block that is smaller than the chunk size. It is also possible to choose a smaller chunk size, but for most cases this has a negative impact.

Internally each block needs to be stored, so having fewer blocks means smaller (and thus faster) lookup tables. This effect is more noticeable if the database is stored on non-SSD disks (aka spinning disks). If you have large files, choosing a large chunk size will also reduce the storage overhead a bit. When restoring, this is also a benefit, as more data can be streamed into the new file, and the data will likely span fewer remote files.

The downside to choosing a large chunk size is that change detection and deduplication cover a larger area. If a single byte is changed in a file, Duplicati will need to upload a new chunk.
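To make the tradeoff concrete, here is a minimal Python sketch of fixed-size chunking with hash-based change detection. It is not Duplicati's actual implementation; the function names, the 100 KiB constant, and the use of SHA-256 are assumptions for illustration only. It shows why a one-byte edit forces exactly one block to be re-uploaded, and why a larger block size means fewer hashes to track.

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # hypothetical 100 KiB, mirroring the default described above


def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split data into fixed-size chunks; the last chunk may be smaller."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]


def block_hashes(data: bytes, block_size: int = BLOCK_SIZE):
    """Hash every chunk; identical chunks yield identical hashes, enabling deduplication."""
    return [hashlib.sha256(b).hexdigest() for b in split_into_blocks(data, block_size)]


# A 300 KiB file yields three blocks. Flipping one byte changes exactly one hash,
# so only that one block would need to be uploaded again.
original = bytes(300 * 1024)
modified = bytearray(original)
modified[150 * 1024] = 0xFF  # change a single byte inside the second block

before = block_hashes(original)
after = block_hashes(bytes(modified))
changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
print(f"{len(before)} blocks, changed block indices: {changed}")  # -> 3 blocks, [1]
```

With a larger block size the same file produces fewer blocks (and thus fewer rows in the lookup tables), but any single-byte change then invalidates a larger block, which is exactly the tradeoff described above.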