Created new NVMe-backed mirror on latest PVE. 8k volblocksize? Should I override to 16k?

Hello,

My understanding was that with the latest version of PVE, which I have updated to (this was a clean install of 8.0.3 that I've kept up to date), the default volblocksize for new pools is now 16k.

I just created an NVMe mirror pool in the GUI, with default settings, and it set it up as shown below.

I'm going to enable thin provisioning, but should I go ahead and change the block size to 16k? 64k?
Someday all this won't be confusing. ;)

Thanks!

Code:
# pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.13-1-pve)


[Screenshot: the new ZFS storage as set up by the GUI]
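In case it helps, the Block Size field in the GUI maps to the blocksize option of that ZFS storage in /etc/pve/storage.cfg, so it can also be inspected and changed from the shell. A rough sketch (the storage name local-nvme is just an example, and changing it only affects disks created afterwards; existing zvols keep their volblocksize):

Code:
# show the storage definition, including its current blocksize
cat /etc/pve/storage.cfg

# override the blocksize for one ZFS storage; only newly created disks are affected
pvesm set local-nvme --blocksize 16k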
 
Depends...
16K allows for better compression ratios, better performance on bigger IO, and a better data-to-metadata ratio, but performance with very small IO (512B to 8K) will be way worse. There is no single best volblocksize; there will always be trade-offs.
I like to create multiple datasets per pool and then multiple ZFS storages in PVE (one per dataset), each with a different volblocksize. That way you can mix volblocksizes and put each virtual disk on the ZFS storage whose volblocksize fits its workload best.

So if you can't decide between 8K/16K/64K, use all three and decide for each virtual disk.
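A rough sketch of that layout (the pool name nvpool and the storage IDs are made up, adjust to taste):

Code:
# one dataset per intended volblocksize
zfs create nvpool/vm-8k
zfs create nvpool/vm-16k
zfs create nvpool/vm-64k

# register each dataset as its own ZFS storage in PVE with a matching blocksize
pvesm add zfspool nv-8k  --pool nvpool/vm-8k  --blocksize 8k  --content images,rootdir --sparse 1
pvesm add zfspool nv-16k --pool nvpool/vm-16k --blocksize 16k --content images,rootdir --sparse 1
pvesm add zfspool nv-64k --pool nvpool/vm-64k --blocksize 64k --content images,rootdir --sparse 1

When creating a VM you then just pick the storage whose blocksize matches that disk's workload, and a disk can later be moved between them with Move Disk.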
 
Thanks! Creating multiple storages per pool and moving disks between them is a great idea, and now that I understand thin provisioning, it makes a lot more sense to me than it would have a few weeks ago. ;)

Realistically, for general workloads in a Linux VM (e.g., Debian, Ubuntu, etc., just doing end user stuff), where do you most often land?
And more importantly, how do you benchmark the performance?

I'm about to teach myself to use fio with some Tom Lawrence videos. One of my weekend projects. :)
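If anyone wants a starting point, a minimal pair of fio runs comparing small random writes against large sequential reads could look like the following; the file path and sizes are arbitrary, and it should be run inside the VM against a scratch file, not data you care about:

Code:
# 4k random writes, roughly what database-style workloads produce
fio --name=rand4k --filename=/root/fio-test --size=2G --bs=4k --rw=randwrite \
    --ioengine=libaio --direct=1 --iodepth=16 --runtime=60 --time_based --group_reporting

# 1M sequential reads, closer to NAS/media-style IO
fio --name=seq1m --filename=/root/fio-test --size=2G --bs=1M --rw=read \
    --ioengine=libaio --direct=1 --iodepth=8 --runtime=60 --time_based --group_reporting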

I'm still a bit confused why the new pool didn't default to 16K, though. My understanding was that the default had changed in the latest release of PVE itself. Now I'm wondering whether a clean install is required for that to take effect. Hopefully a dev will see this and hit me with the clue stick. ;)
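For what it's worth, the volblocksize is fixed when a zvol is created, so one way to see what actually got applied is to query an existing virtual disk directly (the dataset name below is just an example):

Code:
# volblocksize actually used by an existing VM disk
zfs get volblocksize nvpool/vm-100-disk-0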
 
And more importantly, how do you benchmark the performance?
Fio.
Realistically, for general workloads in a Linux VM (e.g., Debian, Ubuntu, etc., just doing end user stuff), where do you most often land?
Depends on the software. For a NAS or media server I would use something bigger, as you primarily get big IO.
For something that is using a PostgreSQL DB in the background I wouldn't use anything other than an 8K volblocksize.
You really have to look at the IO of your services and decide for each one.
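One way to get a feel for that, if the pool is on OpenZFS 0.8 or newer, is the request-size histogram in zpool iostat, which shows how big the IOs hitting the pool actually are while the service is busy (pool name is an example):

Code:
# per-request-size histogram, refreshed every 5 seconds
zpool iostat -r nvpool 5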
 

Thanks! Specialized workloads are actually easier, I think, as the best blocksizes are documented.

What about just general Linux and Windows VMs?
I have some Linux flavors I want to try, and a Windows VM I want to set up for cloud gaming.
I have a TrueNAS server I'll be using for mass storage and accessing via iSCSI/NFS, so in this thread I'm mostly concerned about OS disk images.
 
Hello from the Future (for anyone who finds this later)...

Proxmox 8.2 dropped today, and the release notes include a relevant change (quoted below). I hoped it would clear up the strange UI behavior I was seeing, but I'm still seeing it:

[Screenshot: the ZFS storage edit dialog, still showing the old blocksize placeholder]

The release notes for 8.2 say:

When editing ZFS storages, display 16k as the blocksize placeholder to reflect the current ZFS defaults.
 
Browser cache?

To verify, try another, clean browser or a fresh profile and compare...
Just coming back here to note that clearing the cache fixed it. I happened to notice it wasn't showing the right version number.

Sorry for the confusion. I usually don't have to clear the cache on the minor updates and forgot to do it this time.
 