Why is 128k used for added ZFS storage?

IsThisThingOn

I could be very wrong, but this is how I understand file size / record size / compression in PBS:

- Backups on the host are done in 4MB chunks.
- With compression enabled by default, ZSTD might compress them a little bit.
- With compression enabled by default on a PBS destination that uses ZFS, LZ4 is probably unable to compress the chunks any further.
- LZ4 is able to compress the tails. For example, with a 4MB record size on the PBS and a 3MB incoming chunk, that chunk ends up in a record that is only three-quarters full, so the trailing zeros can be compressed away by LZ4 and we still end up with roughly a 3MB write (a rough back-of-the-envelope sketch of this follows below the list).
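
To check my own math, here is a tiny Python sketch of the logic I have in mind. The function name and the whole model are just my assumption of how padding and tail compression would play out, not how ZFS actually allocates blocks:

```python
# A rough back-of-the-envelope model of how I picture a PBS chunk landing in
# ZFS records. It ignores metadata, ashift rounding and the fact that ZFS
# stores files smaller than one record in a single, smaller block, so the
# numbers are only illustrative.
import math

def on_disk_estimate(chunk_bytes: int, recordsize: int):
    """Return (records, padded_bytes, approx_bytes_with_lz4_tail) for one chunk."""
    records = max(1, math.ceil(chunk_bytes / recordsize))
    padded = records * recordsize      # last record zero-padded to full size
    with_lz4 = chunk_bytes             # assuming LZ4 compresses the zero tail away
    return records, padded, with_lz4

chunk = 3 * 1024 * 1024                # the 3MB chunk from the example above
for rs in (128 * 1024, 2 * 1024 * 1024, 4 * 1024 * 1024):
    records, padded, with_lz4 = on_disk_estimate(chunk, rs)
    print(f"recordsize {rs // 1024:>5}k: {records:>2} records, "
          f"{padded / 2**20:.1f} MiB padded, ~{with_lz4 / 2**20:.1f} MiB after LZ4")
```

If that picture is right, the larger record sizes only cost padding that LZ4 compresses away again anyway.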

That is why I am wondering why PBS sticks with the ZFS default record size of 128k.
Is it just because that is the upstream ZFS default?
Would PBS not benefit immensely from using 4MB instead of 128k?
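
This is the kind of back-of-the-envelope math behind that question: a maximally sized 4MB chunk file needs 32 records at the 128k default but only one record at 4MB, and as far as I understand each record comes with its own block pointer and checksum. Again only a sketch of my reasoning, not a benchmark:

```python
# Records needed per maximally sized (4 MiB) PBS chunk file at both record sizes.
CHUNK = 4 * 1024 * 1024
for label, rs in (("128k", 128 * 1024), ("4M", 4 * 1024 * 1024)):
    print(f"recordsize {label}: {CHUNK // rs} records per chunk")
```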

This seems especially strange when adding a new ZFS pool in the GUI.
There are settings for compression and ashift, but none for record size.

I get that you might want to leave the boot pool at 128k for compatibility reasons, but when adding a datastore, 4MB seems like a better default.
Or even just offering it in the GUI would be cool.