ZFS drive using 40% more available space than given

Jan 24, 2023
Hi there,
I'm not sure if this is me misunderstanding how ZFS works, or if this is even related to ZFS. I'd love it if someone could help explain to me exactly what's going on, or figure out whether it is a bug.

Latest version, up to date:
proxmox-ve : 7.3-1
pve-manager: 7.3-4

I have 5 × 8 TB HDDs set up as RAIDZ2, which gives me 23.53 TB of available space on this ZFS pool.
On this zpool I have 5 VM disks and 1 CT volume. Based on the sizes shown in the VM Disks and CT Volumes menus, they take up 14211 GB (14.2 TB) in total.
The summary shows I am using 19.06TB of 23.53 TB.
Where is the other 4.8TB? What is that being used for?
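
For anyone who wants to check the same numbers from the CLI, something like the following should show how the space is being accounted (the pool name is a placeholder). Note that zpool reports raw space including parity and padding, while the pool summary in the GUI reports usable space:

Code:
# raw pool-level view (includes parity and padding)
zpool list tank
# per-dataset view: nominal volsize vs. what is actually charged to the pool
zfs list -o name,volsize,used,refreservation -r tank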


If I create a new VM with 1 TB of space, my usage goes from 19.06 TB / 23.53 TB to 20.51 TB / 23.53 TB. The 1000 GB disk is shown as 1.07 TB, but based on the change in available space it is actually using 1.45 TB. That's about 35% more than it should be using. What am I not understanding?
[screenshot]
After creating a 1000 GB drive: a 1.45 TB increase instead of 1 TB, or 1.07 TB.
[screenshot]


As a side note, under pve > Disks > ZFS it shows 14.01 TB free and 26 TB allocated, which is entirely different from what the pool summary shows.
[screenshot of pve > Disks > ZFS]


Thank you in advance. I hope to at least understand if there's an issue in what is being reported.
 
It's padding overhead, which causes your zvols to consume more space because your volblocksize is too low. I've explained this a dozen times; just search this forum for "padding overhead".

Also a good read on that topic to understand the padding overhead: https://web.archive.org/web/2021030...or-how-i-learned-stop-worrying-and-love-raidz
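
To make that concrete for a 5-disk RAIDZ2 with the default ashift=12 (4K sectors), and assuming no compression, the per-block allocation works out roughly like this:

- 8K volblocksize: 2 data sectors + 2 parity sectors = 4, padded up to the next multiple of (parity + 1) = 3, so 6 sectors (24K) are allocated per 8K of data.
- 32K volblocksize: 8 data + 6 parity = 14, padded to 15 sectors (60K), so about 53% of the allocation is data, close to the ideal 3/5.
- 128K volblocksize: 32 data + 22 parity = 54 sectors, about 59% data.

Since free space is reported assuming roughly the ideal 3/5 data ratio, a zvol with a small volblocksize appears to consume far more than its nominal size.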
Thank you! I did not know about this padding overhead.
My block size is currently set to 8K, and I do not have compression enabled. Do you recommend deleting the zpool and recreating it with a 32K block size and compression enabled?

All 5 HDDs are 8 TB Seagate IronWolf NAS drives, and I intend to keep running them as RAIDZ2 for redundancy against two HDD failures.
 
With a 5-disk RAIDZ2 I would set the volblocksize to at least 32K. Even then you will still lose some capacity to padding overhead. To really get rid of the padding overhead the volblocksize would have to be much higher, like 128K, but that would also be a bad idea, because everything written to the zvols that is smaller than 128K would then cause terrible overhead.

You don't need to destroy your pool for that. You just need to destroy all your zvols and recreate them, as the volblocksize can only be set at zvol creation. To change the volblocksize for newly created zvols, go to "Datacenter -> Storage -> YourZFSstorage -> Edit" and set the "Block Size" to something like "32K". You can then back up and restore all those VMs, which destroys the existing zvols and recreates them with the new volblocksize.
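
For reference, the CLI equivalent should look roughly like this (the storage name, pool path and VM ID are placeholders):

Code:
# set the block size used for newly created zvols on this storage
pvesm set local-zfs --blocksize 32k

# check which volblocksize an existing zvol was created with
zfs get volblocksize rpool/data/vm-100-disk-0

A backup and restore (or moving the disk to another storage and back) then recreates the zvols with the new volblocksize.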
 
I really appreciate the help. Thank you! That makes everything clear to me. Have a great day.
 
Hi! I had this exact problem on production VMs that couldn't be taken down for several hours or days to copy all the data. I was able to copy the data live to a new volume using LVM (you can also migrate from ext4 to LVM with minimal downtime).

Here's the write-up:
Change ZFS volblock on a running proxmox VM
If anyone has a better idea, let me know!
 
1. Create a dataset.
2. Configure the dataset as a storage with the desired block size.
3. Move the disk (rough CLI sketch below).
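
A rough CLI sketch of those three steps, with placeholder pool, storage and VM names:

Code:
# 1. create a dataset to hold the re-blocked zvols
zfs create tank/vmdata32k

# 2. register it as a ZFS storage with the desired block size
pvesm add zfspool vmdata32k --pool tank/vmdata32k --blocksize 32k --content images,rootdir

# 3. move the disk; the zvol is recreated on the target with the new volblocksize
qm move-disk 100 scsi0 vmdata32k --delete 1

For containers, pct move-volume should do the same.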
 
Not quite, but yes: it uses a feature provided by QEMU (a mirror block job) in live mode, or qemu-img convert in offline mode.
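
For the offline path, that boils down to copying the source zvol onto a pre-created target zvol, roughly like this (the zvol paths are placeholders):

Code:
qemu-img convert -p -n -f raw -O raw /dev/zvol/tank/vm-100-disk-0 /dev/zvol/tank/vmdata32k/vm-100-disk-0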
 
