ZFS Pool space utilization

Veeh

Hi,

I'm creating this thread about ZFS pool space utilization because I would like to make sense of how it works.

I have a ZFS pool with 5 NVMe drives of 1 TB each.
The pool is in a raidz1 configuration.
I did not enable compression, to get maximum performance.
I enabled thin provisioning in the Datacenter storage tab.

My understanding is that 1 drive will be used for parity and therefore taken out of the available space.
And I do see in the host's ZFS tab that the pool is 5 TB and 3.87 TB is available.

There are 5 VM disks currently in the ZFS pool.
The total size of all 5 disks is 1.7 TB.

But the space used on the ZFS pool is 2.6 TB out of the 3.87 TB.

I'm wondering where the extra ~1 TB comes from.
I have been running this host for quite some time, and have created/removed VMs with ~300 GB disks each time.
Is there any maintenance I need to do to remove old stuff?
I'm wondering if I may have old VM disks still in there that are not listed because the VM ID does not exist anymore (a rough way to check is sketched below).
Or does this make sense and it's just extra room used by the system?
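
One rough way to check for leftover zvols would be something like the following sketch. It assumes the default Proxmox "vm-<vmid>-disk-<n>" naming and that VM configs live under /etc/pve/qemu-server/; the pool name is hypothetical and would need to be replaced:

```python
#!/usr/bin/env python3
# Rough sketch: list zvols whose VMID no longer has a Proxmox VM config.
# Assumptions: default "vm-<vmid>-disk-<n>" naming, VM configs under
# /etc/pve/qemu-server/ (containers: /etc/pve/lxc/; on a cluster also check
# /etc/pve/nodes/<node>/qemu-server/). Pool name below is a placeholder.

import re
import subprocess
from pathlib import Path

POOL = "nvmepool"  # hypothetical, replace with the real pool name

# VMIDs that still have a config file on this node
existing_vmids = {p.stem for p in Path("/etc/pve/qemu-server").glob("*.conf")}

# every zvol in the pool
zvols = subprocess.run(
    ["zfs", "list", "-H", "-r", "-t", "volume", "-o", "name", POOL],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for name in zvols:
    match = re.search(r"vm-(\d+)-disk-\d+", name)
    if match and match.group(1) not in existing_vmids:
        print(f"possibly orphaned: {name}")
```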

Thanks
 
Search the forum for "padding overhead". When using 5 disks in raidz1 with the default ashift of 12 and the default volblocksize of 8K, you will lose 60% of the total capacity when using virtual disks for VMs:
5x 1 TB disks = 5 TB raw capacity (this is what zpool list will show you as capacity).
-20% parity loss = 4 TB capacity (this is what zfs list will show you as "usable" capacity).
Everything stored in a zvol will consume 160% of its size because of padding overhead, so you lose another 1.5 TB of that 4 TB = 2.5 TB usable (because 160% of 2.5 TB of data = 4 TB consumed).
And then a ZFS pool shouldn't be filled more than 80%, so you again lose 20% of that 2.5 TB and end up with a real 2 TB that can actually be used to store data.
You would need to destroy every zvol and recreate them with a volblocksize of at least 32K to prevent that padding overhead. Then you would end up with the best-case scenario of a real 3.2 TB of usable capacity for data (and really bad MySQL/PostgreSQL performance, because every 4K/8K/16K IO operation would need to read/write a full 32K block).
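
If you want to play with the numbers, here is a simplified sketch of that math. It assumes ashift=12 (4K sectors) and the 5x 1 TB raidz1 above, and it ignores metadata, compression and other overheads; it just reproduces the rough figures:

```python
# Simplified model of the raidz1 padding-overhead math described above.
# Assumptions: ashift=12 (4K sectors), 5x 1 TB raidz1.

import math

SECTOR = 4096          # ashift=12
DISKS = 5
PARITY = 1             # raidz1
RAW_TB = 5.0           # what zpool list shows
USABLE_TB = RAW_TB * (DISKS - PARITY) / DISKS   # ~4 TB, what zfs list shows

def reported_usage_ratio(volblocksize: int) -> float:
    """How much 'used' space one byte of zvol data shows up as."""
    data = math.ceil(volblocksize / SECTOR)
    # one set of parity sectors per stripe of up to (DISKS - PARITY) data sectors
    parity = math.ceil(data / (DISKS - PARITY)) * PARITY
    # raidz pads every allocation to a multiple of (PARITY + 1) sectors
    total = math.ceil((data + parity) / (PARITY + 1)) * (PARITY + 1)
    # zfs list accounting assumes the ideal (DISKS - PARITY)/DISKS ratio
    return total * SECTOR * (DISKS - PARITY) / DISKS / volblocksize

for kib in (8, 16, 32, 64):
    ratio = reported_usage_ratio(kib * 1024)
    print(f"volblocksize={kib:>2}K: data shows up as {ratio:.0%} "
          f"-> ~{USABLE_TB / ratio:.2f} TB of zvol data fits into {USABLE_TB:.0f} TB")
```

With 8K you get the 160% figure (so only ~2.5 TB of zvol data fits into the 4 TB), while at 32K and above the padding overhead disappears for this layout.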

Also keep in mind that IOPS performance scales with the number of vdevs, not the number of disks. So a 5-disk raidz1 won't get better IOPS performance than a single-disk pool; only throughput scales with the number of disks.
 
I did not know about padding overhead. ZFS is still relatively new to me.
I'll look into that.
Thank you Dunuin.
 
