Out of space: really?

digit23

[Screenshots attached: ZFS storage summary]

According to the summary, I have 2.85TB used out of 5.16TB = 2.31TB left.

Why does creating a 1.5TB disk fail with "out of space"?
 
Let me guess...
Your storage is a raidz1/2/3 and you didn't increase the block size before creating your first zvols? If so, this is normal: because of padding overhead, every zvol will consume way more space.
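To confirm that guess, something like this shows the pool layout, the ashift and the volblocksize of an existing zvol (pool and zvol names taken from the output further down the thread):

zpool status vm
zpool get ashift vm
zfs get volblocksize vm/vm-108-disk-0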
 
7x 1TB disks in raidz1 with the default 8K volblocksize would result in 3.5TB of usable space for zvols (with ashift=12, each 8K block is stored as two 4K data sectors plus one parity sector, padded to four sectors in total, so only about half of the raw 7TB holds data). And because a ZFS pool becomes slow and fragments faster when filled up too much, only 2.8 - 3.15 TB (80-90%) should be used. And that's TB, so only 2.55 - 2.86 TiB.

To not waste half of your raw capacity you would need to destroy all virtual disks of all VMs and recreate them after increasing the "Block size" of your ZFS storage. I would recommend using a volblocksize of 32K, as a too big volblocksize again wastes space and performance when doing small IO.
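For reference, the webUI "Block size" ends up as a blocksize option on the zfspool storage definition. A sketch of what that looks like (storage name "local-zfs" is just an example, and only newly created zvols pick the setting up):

# /etc/pve/storage.cfg
zfspool: local-zfs
        pool vm
        content images,rootdir
        blocksize 32k

# or, assuming the CLI option of the same name:
pvesm set local-zfs --blocksize 32k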
 
Zvols consuming space on a 7-disk raidz1 created with ashift=12.
How much space data stored on a VM's virtual disk actually consumes (parity loss not included):

4K volblocksize:   171%
8K volblocksize:   171%
16K volblocksize:  128%
32K volblocksize:  107%
64K volblocksize:  107%
128K volblocksize: 102%
256K volblocksize: 102%
512K volblocksize: 101%
1M volblocksize:   101%
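Those percentages come from the raidz allocation rule: per block, data sectors plus parity sectors, padded up to a multiple of (parity + 1), then compared against the nominal 6/7 "usable" fraction of the pool. A quick awk sketch of that arithmetic (my own recomputation, not the exact spreadsheet behind the table above, so expect a percent of rounding difference on some rows):

awk 'BEGIN {
  n = 7; p = 1; sector = 4096                               # 7-disk raidz1, ashift=12
  cnt = split("4 8 16 32 64 128 256 512 1024", kib, " ")
  for (i = 1; i <= cnt; i++) {
    d = kib[i] * 1024 / sector                              # data sectors per block
    par = p * int((d + (n - p) - 1) / (n - p))              # parity sectors
    alloc = d + par
    if (alloc % (p + 1)) alloc += (p + 1) - alloc % (p + 1) # padding
    shown = alloc * (n - p) / n                             # space as the pool accounts it
    printf "%4dK volblocksize: %3.0f%%\n", kib[i], 100 * shown / d
  }
}'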
 
Holy shit!!! 275G -> 449G

root@prox01:~# zfs get volsize,refreservation,used vm/vm-108-disk-0
NAME              PROPERTY        VALUE  SOURCE
vm/vm-108-disk-0  volsize         275G   local
vm/vm-108-disk-0  refreservation  449G   local
vm/vm-108-disk-0  used            449G   -
 
Yes, so 163% size in practice and 171% in theory.
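To see that for all zvols at once, something like this should do it (pool name taken from your output):

zfs list -t volume -r -o name,volsize,refreservation,used vm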

In case you are running DBs or similar workloads that do a lot of small IO, I would highly recommend creating a striped mirror (raid10). 8x 1TB disks in a striped mirror would give you 4 times the IOPS performance, you could use a 16K volblocksize, it is easier to add more storage when needed, reliability is better, resilvering times are way lower, and there is no padding overhead with zvols, so the full 4TB are really usable.
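A striped mirror of 8 disks would be created roughly like this (pool name and the by-id paths are placeholders for your own disks, and this wipes whatever is on them):

zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
  mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
  mirror /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
  mirror /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8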

Padding overhead, by the way, only affects zvols and not datasets, so LXCs could use the full capacity that is left after parity (roughly 6TB here).

And if you still want to use a raidz1, the easiest way to change the volblocksize would be:
1.) in the PVE webUI go to Datacenter -> Storage -> select your ZFS storage -> Edit -> set something like "32K" as the "Block size"
2.) stop and backup a VM
3.) verify the backup
4.) restore that VM from backup overwriting the existing VM
5.) repeat steps 2 to 4 until all VMs are replaced.
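On the CLI, steps 2 to 4 would look roughly like this for a single VM (VMID 108 is taken from this thread, the backup storage name, file name and compression suffix are placeholders for your setup):

qm stop 108
vzdump 108 --mode stop --storage <your-backup-storage>
qmrestore /var/lib/vz/dump/vzdump-qemu-108-<timestamp>.vma.zst 108 --force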
 
Instead of backup/restore:
If I set "Block size" to 32K, clone the VM and delete the original, will the new one use the 32K volblocksize?
 
I don't think cloning will work, as a "zfs clone" will reference the old snapshotted data and you can't destroy the original zvol/dataset as long as the clone exists.
You really need to write all the data again. If you have enough empty space you could copy the whole zvol locally using "zfs send | zfs recv", rename the old zvol, rename the new zvol, test if the VM is still working and only then destroy the old zvol.
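The copy/rename dance for one disk might look roughly like this, using the zvol names from the output above (VM stopped; the @copy snapshot and the -new/-old suffixes are just placeholders):

zfs snapshot vm/vm-108-disk-0@copy
zfs send vm/vm-108-disk-0@copy | zfs recv vm/vm-108-disk-0-new
zfs rename vm/vm-108-disk-0 vm/vm-108-disk-0-old
zfs rename vm/vm-108-disk-0-new vm/vm-108-disk-0
# boot and test the VM, then clean up:
zfs destroy -r vm/vm-108-disk-0-old
zfs destroy vm/vm-108-disk-0@copy

As far as I know, a plain send/receive recreates the zvol with its original volblocksize, so on its own this won't fix the padding overhead; for that, the backup/restore route above is the safer path.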
 
