Missing Storage in ZFS pool?

oguruma

I have a RAIDZ-1 pool with 3x 240 GB drives.

The Proxmox ZFS tab shows me Size: 668G - Free: 370G - Allocated: 297G.

Under the pool's tab I see the various disk images totaling 312G of storage.

However, usage shows that it's 98.85% full (< 5G remaining).

What's going on here? Where is the remaining storage on the pool?

Other people use this box to poke around and experiment, so the only explanation I can come up with is that somebody deleted a VM and its image is still sitting somewhere in the pool.
 
To get a better idea of how storage is being consumed, could you post the output of zfs list -t all -r <pool_name>?
 
Code:
NAME                      USED  AVAIL     REFER  MOUNTPOINT
vmpool-1r                 426G  4.94G      128K  /vmpool-1r
vmpool-1r/vm-100-disk-0  54.6G  33.9G     25.6G  -
vmpool-1r/vm-102-disk-0  43.7G  33.9G     14.8G  -
vmpool-1r/vm-103-disk-0  40.9G  9.29G     36.6G  -
vmpool-1r/vm-104-disk-0  27.3G  19.0G     13.2G  -
vmpool-1r/vm-106-disk-0  40.9G  23.9G     22.0G  -
vmpool-1r/vm-108-disk-0  40.9G  19.4G     26.5G  -
vmpool-1r/vm-109-disk-0   136G  90.1G     51.3G  -
vmpool-1r/vm-110-disk-0  40.9G  37.3G     8.62G  -

I think I am starting to see how this works.

When you create a VM and allocate space, is that the usable space after formatting? So the space actually allocated per VM on the pool will be greater than what you allocate in the UI?

For example, vm-109 was created with 100G of space, yet it shows 136G used in the pool.

If that's the case, then the figures posted above look right for 3x 240G drives in RAIDZ-1, right?
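
In case it helps, this is roughly how I've been checking a single disk image (just a sketch, I may be misreading the properties; the dataset name is taken from the listing above):

Code:
# compare the size configured in the UI with what the zvol actually occupies on the pool
zfs get volsize,volblocksize,used,referenced,refreservation vmpool-1r/vm-109-disk-0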
 
 
All the images are bigger because of padding overhead. If your pool was created with ashift=12 and a 3-disk raidz1, you would need at least a volblocksize of 16K. If you created all your virtual disks with the default volblocksize of 8K, you will lose 33% of your raw capacity to parity plus another 17% of your raw capacity to padding. So with an 8K volblocksize every zvol (but not dataset) should be 133% in size; with 16K only 100%. You can use the GUI to change the pool's block size to 16K: Datacenter -> Storage -> YourPool -> Edit -> Block Size

But the volblocksize can only be set at creation of a zvol, so you need to destroy and recreate every virtual disk. Restoring a backup or migrating the VM should be the easiest way to do this, because that also destroys and recreates the zvols.
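
If you want to double-check the current layout before recreating anything, something along these lines should show it (sketch; I'm assuming the Proxmox storage ID matches the pool name, which may not be the case on your setup):

Code:
# ashift of the pool (may report 0 if it was auto-detected at pool creation)
zpool get ashift vmpool-1r
# volblocksize is fixed per zvol at creation time
zfs get volblocksize vmpool-1r/vm-109-disk-0
# the same block size setting as in the GUI, via CLI
pvesm set vmpool-1r --blocksize 16k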
 
Thank you very much for the input.

I'm a bit lost as to why this is even an option. What are the benefits of an 8K block size vs 16K?
 
If you want to use ZFS you need to learn how it works first. Just using default values will often result in bad performance, lost capacity, or even total data loss. That everything ends up 33% bigger (so you are wasting an additional 17% of raw capacity) is totally expected, and you can read here why. With a higher volblocksize like 16K there should be no padding, so you wouldn't waste capacity on padding overhead anymore.
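
To make that concrete, here is the arithmetic as I understand it for this exact layout (3 disks, raidz1, ashift=12 so 4K sectors; RAIDZ allocates in multiples of parity+1 sectors):

Code:
8K volblocksize:   2 data + 1 parity = 3 sectors, padded up to 4 sectors
                   -> 16K raw for 8K of data = 50% usable instead of the nominal 66%
16K volblocksize:  4 data + 2 parity = 6 sectors, already a multiple of 2, no padding
                   -> 24K raw for 16K of data = the expected 66% for a 3-disk raidz1

That missing 17% is what shows up as the roughly 33% inflation per zvol, e.g. the 100G disk of vm-109 reporting about 133G (plus a bit of metadata) used.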
 
I get all that, but I guess I don't fully understand why 8K would be the default block size at all...
 
Because ZFS was developed for Solaris, where 8K was the default. Every pool needs a different volblocksize, and there can't be one default that works everywhere. It all depends on the number of drives, the type of your pool (mirror / striped mirror / raidz1 / raidz2 / raidz3), and your ashift. So everyone needs to calculate the optimal volblocksize and set it before creating the first VM.
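
As a rough sketch of the calculation (this is how I understand RAIDZ allocation, so double-check it against your own layout):

Code:
sector size    = 2^ashift                                  (ashift=12 -> 4K)
data sectors   = volblocksize / sector size
parity sectors = roundup(data sectors / (disks - parity))
allocation     = roundup(data + parity, multiple of parity + 1)
padding        = allocation - data - parity

Pick the smallest volblocksize where padding is 0 and the data sectors fill whole rows; for 3 disks with raidz1 and ashift=12 that works out to 16K.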
 
