ZFS-2 Showing total usable storage at half of what it should be

GuineaPig352

New Member
Jun 27, 2021
Currently I have made a ZFS-2 pool made up of four 500GB SSDs. When I look at the size from the ZFS menu it shows the proper size.

Screenshot (2).png

The problem is that from the node menu and the disk-select menu for creating virtual machines, the total capacity shows as significantly reduced.

Screenshot (1).png
I would like to know how to fix this. I have seen other threads saying that it's drive padding or something along those lines. I'm not looking for an explanation of what it is, I'm just looking for how to solve it. I'm a complete beginner when it comes to Proxmox and ZFS and have only minor experience with hardware RAID, so this type of problem is new to me. What should I do?
 
First, what do you mean by "ZFS-2"? Are you talking about a "raidz2 ZFS pool"?
With raidz2 you will always lose 50% of the capacity to parity data if you only use 4 disks (it is basically a raid6, so 2 drives are always used for parity). So it isn't possible to get more than 1TB of storage from 4x 500GB disks.
You get padding overhead if the volblocksize of the pool isn't optimized. Let's say you didn't change the default volblocksize from 8K to something higher like 16K or 256K. Then you are losing an additional 17% of the raw capacity to padding overhead and only 0.66TB is usable in reality. Proxmox won't show you this, because in theory 1TB is usable. But everything you write to the virtual disks is 150% in size, so after writing 0.66TB of data to the virtual disks, 1TB of storage will be used for it.
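If you want to see what your existing virtual disks are actually using, you can check it from the shell. A small sketch; the pool name "tank" and the disk "vm-100-disk-0" are just examples, use your own names:

Code:
# Block size an existing virtual disk was created with
zfs get volblocksize tank/vm-100-disk-0

# Default block size Proxmox will use for new disks on that storage
grep -A5 "zfspool: tank" /etc/pve/storage.cfg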

And Proxmox is reporting the size in GiB, not GB. Your 871 GiB capacity is the same as 935GB. So that part is fine, because you will always lose some of the capacity.
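That conversion is just unit math and easy to double-check:

Code:
# 871 GiB in bytes -> roughly 935 GB (decimal)
echo $((871 * 1024 * 1024 * 1024))    # prints 935229128704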

By the way... raidz2 isn't great as VM storage. You would get way better performance and the same capacity using a striped mirror (like raid10) instead. Raidz2 is only better if you don't care about the performance of the drives and really want the slightly better redundancy (any 2 drives may fail with raidz2, only 1 to 2 drives with a striped mirror).
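For reference, this is roughly how the two layouts are created from the shell (the GUI under Node -> Disks -> ZFS can do the same). The pool name and the /dev/disk/by-id paths are placeholders for your four SSDs:

Code:
# Striped mirror ("raid10"): two mirror vdevs, ~1TB usable, much better IOPS
zpool create tank mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2 mirror /dev/disk/by-id/ssd3 /dev/disk/by-id/ssd4

# raidz2: one vdev, also ~1TB usable with 4 disks, but any 2 disks may fail
zpool create tank raidz2 /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2 /dev/disk/by-id/ssd3 /dev/disk/by-id/ssd4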
 
Thank you, I reworked my system into a raidz instead of a raidz2 ZFS pool. Two final questions. When giving a virtual machine a drive of, let's say, 750GB, the summary page says I have a total of 1.2TB used even though the only storage being used is that 750GB drive. Last question: can I change the volblocksize to something different? If so, would something like 16K work, and would that give me more usable space?
 
Thank you, I reworked my system into a raidz instead of a raidz2 ZFS pool. Two final questions. When giving a virtual machine a drive of, let's say, 750GB, the summary page says I have a total of 1.2TB used even though the only storage being used is that 750GB drive.
That's the padding overhead. When using raidz1 with 4 drives, ashift=12 and a volblocksize of 8K, you will lose 25% of the raw capacity to parity and another 25% of the raw capacity to padding. So ZFS will tell you that you have 1.5TB of usable capacity, but everything written is again 150% in size, so after using 1TB inside the VM, 1.5TB is used on the pool and the pool is full. In practice you can therefore only store 1TB.

Also keep in mind that ZFS is a copy-on-write filesystem. Because of that you always need some free space on the pool. After using 80% the pool will get slow, and after using 90% ZFS will switch into panic mode. So 10-20% should always be kept free (best to set a ZFS quota for that, so it can't be used at all), and then only 0.8TB is usable.

And if you want to use snapshots too, these also need space. If you keep snapshots for too long, or you have a lot of file changes, the snapshots can consume a multiple of the data itself. For example, you might want to keep another 50% free for snapshots. Then you are down to 0.4TB of usable space for virtual disks.
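Setting such a quota is a one-liner on the host. A sketch, assuming your pool is called "tank" and you want to cap it at roughly 80% of the 1.5TB that ZFS reports (adjust the number to your own pool):

Code:
# Cap the whole pool at 1.2T so ~20% always stays free
zfs set quota=1.2T tank

# Verify
zfs get quota tank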
If you don't want to lose 25% of the raw storage to padding, you need to increase the volblocksize. With a volblocksize of 16K you should only lose around 8% to padding, and with a 64K volblocksize only around 2%. But the higher your volblocksize is, the more write amplification you get and the more space you lose on small writes, like database queries.
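If you want to see where those percentages come from, you can model the raidz allocation rules yourself. A rough sketch assuming 4 disks, raidz1 and ashift=12 (4K sectors): each volblock gets 1 parity sector per row of up to 3 data sectors, and the allocation is padded up to a multiple of (parity + 1) = 2 sectors. Ideally 75% of the raw space would be data; whatever falls short of that is padding.

Code:
for vbs in 8 16 64 128; do
  awk -v vbs="$vbs" 'BEGIN {
    data   = vbs / 4                 # 4K data sectors per volblock
    parity = int((data + 2) / 3)     # ceil(data / 3)
    total  = data + parity
    if (total % 2) total++           # pad allocation to a multiple of 2
    printf "%4dK: %2d data / %2d allocated sectors -> %.1f%% of raw lost to padding\n", vbs, data, total, 75 - 100 * data / total
  }'
done

That prints 25% for 8K, about 8% for 16K and about 2% for 64K, matching the numbers above.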
Last question: can I change the volblocksize to something different? If so, would something like 16K work, and would that give me more usable space?
Volblocksize can't be changed later. It is only set at creation of the virtual disks, so you need to destroy and recreate them. The easiest way is to change the block size for the storage (Datacenter -> Storage -> YourZfsPool -> Edit -> Block Size), then back up all VMs and restore the backups so that they overwrite the old VMs. This deletes the old virtual disks and creates new ones with the same content but the new volblocksize.
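Roughly the same workflow from the CLI, in case you prefer the shell. The storage ID "local-zfs", the VMID 100, the backup target "local" and the archive path are all placeholders, so double-check them on your system before overwriting anything:

Code:
# New virtual disks on this ZFS storage will now be created with 16K volblocksize
pvesm set local-zfs --blocksize 16k

# Back up the VM, then restore it over itself so its disks get recreated
vzdump 100 --storage local --mode stop --compress zstd
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --force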
 
Volblocksize can't be changed later. It is only set at creation of the virtual disks, so you need to destroy and recreate them.
Hi,

There is a better way: you can create a new dataset, add that dataset under Datacenter -> Storage, and set the desired volblocksize there.

Then move your vdisk from the current storage to this new storage.
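A sketch of that approach on the command line. The pool/dataset names, the storage ID, the VMID and the disk name are examples; note that the block size is set on the Proxmox storage definition, not on the ZFS dataset itself:

Code:
# New dataset that will hold the recreated virtual disks
zfs create tank/vmdata16k

# Register it as an additional Proxmox storage with a 16K block size
pvesm add zfspool vmdata16k --pool tank/vmdata16k --blocksize 16k --content images,rootdir

# Move a disk; the copy on the new storage is created with the new volblocksize
qm move_disk 100 scsi0 vmdata16k --delete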

The optimized volblocksize is only one part of the problem. The other part is the fact that most OSs will use 512b by default. In most cases you will have better performance if your OS block size is the same as the zvol block size. Otherwise you will get read-modify-write (RMW) at the zpool level (on raidz1/2/3 it will be worse).
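If you want to compare the two sides, you can check the zvol on the host and the filesystem block size inside a Linux guest. Names are examples again:

Code:
# On the Proxmox host: block size of the zvol backing the disk
zfs get volblocksize tank/vm-100-disk-0

# Inside a Linux guest: block size of the root filesystem (ext4/xfs)
stat -f -c "block size: %S bytes" /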

Good luck / Bafta.
 
