[SOLVED] Proxmox ZFS storage available not matching

iwobic

New Member
May 17, 2021
Good morning all,

I bought a new Dell server with 8 × 480GB SSDs. Two of them are configured in RAID1 and host the Proxmox installation, while the remaining six are configured in a RAIDZ1 pool that hosts the virtual disks of the VMs and containers.
I have created one container with an 8GB volume and three virtual machines with 60+1000GB, 100GB and 60GB disks respectively, totaling 1228GB stored on the RAIDZ1 pool.
Now if I go to pve01 > Disks > ZFS, the pool I created shows: size 2.78TB, free 2.76TB, allocated 194.78GB.
But when I go to vm-pool > Summary, it shows: Usage 97% (2.15TB of 2.22TB), and I'm not allowed to increase the size of my VMs' disks.

These numbers don't make sense to me. I'm a bit confused; what am I doing wrong here?
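
In case it helps, this is roughly how I'm comparing the two views on the CLI (just a quick check, I may well be misreading the columns):

zpool list vm-pool        # raw pool size/alloc/free, before parity is taken into account
zfs list -o space vm-pool # usable space as the datasets see it, after parity and reservations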

thanks,
 
This is what zfs list shows:

NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     8.39G   422G   104K  /rpool
rpool/ROOT                8.38G   422G    96K  /rpool/ROOT
rpool/ROOT/pve-1          8.38G   422G  8.38G  /
rpool/data                  96K   422G    96K  /rpool/data
vm-pool                   1.96T  62.0G   153K  /vm-pool
vm-pool/subvol-103-disk-0 2.50G  5.50G  2.50G  /vm-pool/subvol-103-disk-0
vm-pool/vm-100-disk-0     97.9G   121G  39.2G  -
vm-pool/vm-100-disk-1     1.59T  1.65T   187M  -
vm-pool/vm-101-disk-0      111G   140G  33.1G  -
vm-pool/vm-102-disk-0      163G   155G  69.9G  -
 
I bet you didn't change the volblocksize to match your raidz2 pool (16K instead of 8K), so everything stored on those zvols is double the size because of bad padding. That's why your 1TB zvol is using 1.59TB on the pool. Also keep in mind to enable discard for all virtual disks and tell the guest OS to use TRIM/discard, or your pool won't free data that is deleted inside the guest.
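
Something like this should show the current value and turn discard on for one disk (rough sketch only, I'm guessing the VMID and bus name, adjust them to your config):

zfs get volblocksize vm-pool/vm-100-disk-1            # what the zvol was created with
qm set 100 --scsi1 vm-pool:vm-100-disk-1,discard=on   # assuming disk-1 is scsi1 on VM 100
fstrim -av                                            # then inside a Linux guest, or mount with discard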

By the way, raidz2 isn't recommended as VM storage, because you get much better IOPS and latency with a striped mirror, where none of the complex parity calculations on the CPU are needed.
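
If you ever rebuild it, a striped mirror over your 6 SSDs would look roughly like this (sketch only; the sdX names are placeholders, in practice use /dev/disk/by-id paths):

zpool create -o ashift=12 vm-pool mirror sda sdb mirror sdc sdd mirror sde sdf   # three striped 2-way mirrors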

But something looks wrong. The values in the "AVAIL" column should be the same for all zvols of the "vm-pool" pool. Did you set up any quotas or something like that?
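
You can check that quickly with something like this (sketch; -r walks the whole pool):

zfs get -r quota,refquota,reservation,refreservation vm-pool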
 
Dunuin,

I haven't used RAIDZ2

It's...
2 disks in RAID1 on which Proxmox is installed.
6 disks in RAIDZ1 on which the vms are stored.


Both the local-zfs and vm-pool ZFS storages are set to block size 8k.
 
OK, but it's the same problem. No raidz level is great as VM storage. And with 6 drives and raidz1 you are still losing 3 and not 1 drive of capacity to bad padding if you use an 8K volblocksize, because everything stored will be 66% bigger. Look here. So for example 32K would be a better volblocksize (only 20% and not 50% of capacity lost).
And the volblocksize can only be set when a virtual disk is created, so you need to destroy and recreate every virtual disk to use another volblocksize.
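
On the Proxmox side the blocksize for newly created zvols is a property of the storage, so roughly (sketch; I believe pvesm accepts the option like this, otherwise set "blocksize 32k" in /etc/pve/storage.cfg):

pvesm set vm-pool --blocksize 32k             # new zvols on this storage get volblocksize=32K
zfs get volblocksize vm-pool/vm-100-disk-0    # verify after a disk has been recreated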
 
Thanks Dunuin. The table you posted says 20% more used storage with 6 disks and 8k volblocksize.

What is the recommended volblocksize to avoid bad padding then?
 
No, the table tells you 66% more storage used. The first column is sectors (4K = 1 sector if the pool is using an ashift of 12), so your 8K volblocksize is the 2-sector row in the table. And the percentages don't tell you how much bigger everything gets, but how much of the raw capacity of all your drives you are losing. So 50% means you are losing 3 of 6 drives. You've got 3TB of raw capacity, and if you write 1.5TB to the pool, 3TB will be stored (1.5TB of data + 0.5TB parity + 1TB padding overhead). ZFS will tell you that you've got 2.5TB of usable capacity, but because everything stored is 66% bigger, you only get a usable capacity of 1.5TB. And a pool should only be filled up to 80% or it will get slow, so in reality that's only 1.2TB of really usable space.
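
If you want to sanity-check that, here is a rough back-of-the-envelope version of the padding rule as I understand it (raidz1 pads each allocation up to a multiple of parity + 1 sectors; assuming ashift=12, 6 disks, 8K volblocksize):

data=2      # 8K volblocksize / 4K sectors
parity=1    # ceil(2 data sectors / 5 data disks) * 1 parity disk
alloc=$(( ( (data + parity + 1) / 2 ) * 2 ))               # 3 sectors rounded up to 4
echo "each 8K block takes $(( alloc * 4 ))K of raw space"  # -> 16K, i.e. half the raw capacity is overhead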

Like I said, 16K or 32K would be a good value to try.
 
It worked.
I moved the VMs and containers to an NFS storage, redeployed the ZFS pool with a 32K volblocksize, and moved the VMs/containers back onto it.
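
In case it helps anyone else, this is roughly what I ran, from memory, so treat it as a sketch ("nfs01" is just what my NFS storage is called, and I think pct move_volume is the container equivalent):

qm move_disk 100 scsi0 nfs01          # repeat for every disk of every VM
pct move_volume 103 rootfs nfs01      # container root volume
# ...destroy and recreate the pool/storage with blocksize 32k, then move everything back:
qm move_disk 100 scsi0 vm-pool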
 
RAID5/RAIDZ1 is not recommended because of the possibility of a URE on a second disk after the initial disk failure.

If it is a production system, run 3-way mirrors for safety, or at least raidz2.
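
With your 6 disks that would look roughly like one of these (sketch only, placeholder device names):

zpool create -o ashift=12 vm-pool mirror sda sdb sdc mirror sdd sde sdf   # two striped 3-way mirrors
zpool create -o ashift=12 vm-pool raidz2 sda sdb sdc sdd sde sdf          # or raidz2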
 
