OpenZFS 2.0?

You are probably not using zstd but lz4 instead.
Well, yes, all the existing pools were using lz4.

I was asking about upgrading them to use zstd.

So far I've set compression=zstd-10 on my storage datasets and have just left my boot rpool as-is. (I went aggressive because I know I won't be CPU-bottlenecked on writes; if you don't specify a level, compression=zstd is equivalent to zstd-3.)
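For reference, here's roughly what that looks like from the shell (the pool/dataset names are just examples, adjust to your own layout):

# set zstd level 10 on a dataset; only newly written blocks are affected
zfs set compression=zstd-10 tank/storage

# verify the setting and the achieved ratio
zfs get compression,compressratio tank/storage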

So I was curious what the results would be if I changed that one too. I know there's been some discussion of adding zstd support to bootloaders, and even of using it as the kernel image compression format, so the advice and obstacles around this feature are still constantly changing.
 
I currently don't have anything specific in mind for GRUB, so I would hold off on enabling zstd compression for the root pool.
But if you're using UEFI and did the setup with the PVE 5.4 ISO or newer, you're booting from the VFAT EFI system partition anyway (the kernel and initrd are stored there), so it's safe to enable zstd there, as ZFS is not involved in the initial boot process.
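If you want to double-check which boot setup you're on before touching anything, something like the following should tell you (proxmox-boot-tool ships with newer PVE releases; on older installs, check how /boot/efi is mounted instead):

# show whether the EFI system partition / UEFI setup is in use
proxmox-boot-tool status

# confirm the pool actually has the zstd feature flag (OpenZFS 2.0+)
zpool get feature@zstd_compress rpool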
 
I am surprised the default got changed, and apparently the default is zstd-3, not zstd-1.

The compression setting is becoming a cold storage vs. live storage choice: if you're running servers, lz4 is going to be much, much better, and it won't even be close.

For something like an archive NAS, zstd might be worth considering, but zstd-1 seems a much better compression/speed tradeoff than zstd-3. If you want numbers for your own data rather than rules of thumb, see the sketch below.
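A rough way to measure it yourself (the throwaway dataset names and the /srv/sample path are hypothetical; use a sample that resembles your real workload):

# create a test dataset per algorithm, write the same sample into each, and time it
for algo in lz4 zstd-1 zstd-3; do
  zfs create -o compression=$algo "tank/test-$algo"
  time sh -c "cp -a /srv/sample/. /tank/test-$algo/ && sync"
done

# compare the achieved compression ratios
zfs get compressratio tank/test-lz4 tank/test-zstd-1 tank/test-zstd-3

The sync inside the timed command matters: ZFS buffers writes, so without it the copy times would mostly measure the page cache, not the compressor.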
 
