VM disk usage not adding up in ZFS GUI

Prothane

Member
Nov 11, 2021
Milton, Ontario, Canada
www.prothane.ca
Not really sure how to explain this; I'm new to PVE. When I look at the ZFS pool (HDD) summary graph on the main tab, it shows a huge amount of space taken up by VM disks: 85.55%, about double the maximum size of the .raw disks on that pool. I have thin provisioning on. But if I navigate to Server (VMServer1) / Disks / ZFS / (HDD), it shows 20.40% used, which makes sense with thin provisioning and what is installed. Is this a bug? Or should I be worried if I add more VM disks and the graph on the main tab goes over 100%?

Thanks,

Jeff McTear
 

Attachments

  • Screen Shot 2022-03-10 at 4.24.07 PM.png (443.4 KB)
  • Screen Shot 2022-03-10 at 4.23.38 PM.png (420.8 KB)
  • Screen Shot 2022-03-10 at 4.23.22 PM.png (343.5 KB)
Most of the time, when a virtual disk uses way more space than expected, it's one of these three things:

1.) You have old snapshots which prevent data from being removed or space from being freed up. Check it with zfs list -o space -r YourPoolName. If "USEDSNAP" is very high, remove old snapshots. Snapshots grow and grow over time, and after some months/years they can easily be a multiple of the size of the data you snapshotted. If you need restore points that are more than a few days/weeks old, use PBS backups instead for long-term restore points. They might save you some space compared to snapshots.
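For example, with the pool name from this thread ("HDD"; the snapshot name in the last line is only a placeholder), checking and cleaning up would look something like this:

zfs list -o space -r HDD
zfs list -t snapshot -r HDD -o name,used,creation
zfs destroy HDD/vm-100-disk-0@some-old-snapshot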

2.) Discard/TRIM isn't working. If TRIM commands aren't passed through the complete chain from your guest OS down to the ZFS pool, ZFS can't free up space, so nothing will ever be deleted and your pool will grow over time. Make sure your guest OS is sending TRIM commands when it deletes something. Also make sure your VM is using a protocol that supports TRIM, like SCSI with the VirtIO SCSI controller, and not IDE or VirtIO Block. And make sure your virtual disk has the discard checkbox checked.
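A rough sketch of doing that from the CLI (VM ID 100 is from this thread, but the disk name is just a guess, so adjust it to your setup):

qm set 100 --scsi0 HDD:vm-100-disk-0,discard=on

That re-specifies the existing disk with discard=on; the Discard checkbox in the GUI does the same. Afterwards a manual "fstrim -av" inside a Linux guest (or a retrim from the Windows drive optimizer) frees up the already-deleted space.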

3.) Raidz padding overhead, because ZFS rookies don't realize that you need to increase the volblocksize when using raidz1/2/3; otherwise you will waste a lot of space to padding overhead and everything you write to a zvol can end up being up to double the size. If that's also your case, you can increase the volblocksize for newly created virtual disks by changing the pool's block size (WebUI: Datacenter -> Storage -> YourPool -> Edit -> Block Size). But the volblocksize can only be set at creation, so you would need to destroy and recreate all your existing virtual disks. The easiest way to do this is by backing up the VMs (vzdump or PBS) and then restoring them, replacing the old VMs. That way all virtual disks get recreated with the same data but using the new volblocksize. If you have a cluster, you can alternatively migrate the VMs between two nodes with the same result. What volblocksize to choose depends on your ashift, on whether you use raidz1, raidz2 or raidz3, and on the number of drives your raidz1/2/3 consists of. I would recommend reading this blog post by the ZFS creator. He explains why there is padding overhead and how to calculate the optimal volblocksize for your pool: https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz
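Sketched as CLI steps (storage/pool name "HDD" from this thread; the VM ID, disk name, backup storage and archive path are placeholders):

pvesm set HDD --blocksize 16k
zfs get volblocksize HDD/vm-100-disk-0
vzdump 100 --storage yourBackupStorage
qmrestore /path/to/created/backup.vma.zst 100 --storage HDD --force

The first line changes what newly created zvols will use (same as the Block Size field in the GUI), the second just shows what an existing disk currently uses, and the backup plus forced restore recreates the zvol with the new volblocksize.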
 
Thanks Dunuin for helping me.

1.) Ran your cmd "zfs list -o space -r HDD". There was no USEDSNAP, but it does tell a little bit of a story if you take a look at the attached picture. For VM 100, 12.6 GB is the .raw size but the max size is 250 GB. Look at what I highlighted and see what you think: the "AVAIL" column shows 12 TB for each VM, but for the containers it is the max size I set during setup.

2.) I do have the disks set up with VirtIO SCSI, but I didn't have the discard checkbox checked (turned it on now). I ran a command on the Windows VM to make sure TRIM was on, and it was.
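For reference, the usual Windows-side checks would be something like the following (generic examples, not necessarily the exact commands run here):

fsutil behavior query DisableDeleteNotify
Optimize-Volume -DriveLetter C -ReTrim -Verbose

The first returns 0 when TRIM/delete notifications are enabled; the second forces a retrim from PowerShell.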

3.) My setup is 5 disks @ 4 TB with ZFS set to raidz1, 8k block size (25%). I changed it to 16k (25%) and then migrated the VM to another Proxmox server, with no change: VM 100's 12.2 GB (250 GB max) .raw file still shows as 418 GB "USED" with your cmd "zfs list -o space -r HDD".

Looks like the containers are behaving the way that I would expect, just not the VMs.
 

Attachments

  • Capture1.JPG (108.6 KB)
3.) My setup is 5 disks @ 4 TB with ZFS set to raidz1, 8k block size (25%). I changed it to 16k (25%) and then migrated the VM to another Proxmox server, with no change: VM 100's 12.2 GB (250 GB max) .raw file still shows as 418 GB "USED" with your cmd "zfs list -o space -r HDD".
With a 5 disk raidz1 you usually want a volblocksize of 32K to only lose 20% of your raw storage to parity/padding. With a volblocksize of 16K you will lose 33%, and with the default volblocksize of 8K even 50% of your raw capacity.
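Rough math behind those numbers, assuming ashift=12 (4K sectors) and keeping in mind that raidz1 rounds every allocation up to a multiple of parity+1 = 2 sectors:

8K volblocksize: 2 data + 1 parity = 3 sectors, padded to 4 -> 16K allocated for 8K of data (50% lost)
16K volblocksize: 4 data + 1 parity = 5 sectors, padded to 6 -> 24K allocated for 16K of data (33% lost)
32K volblocksize: 8 data + 2 parity = 10 sectors, no padding -> 40K allocated for 32K of data (20% lost)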
Looks like the containers are behaving the way that I would expect, just not the VMs.
Padding overhead only affects block devices, not file systems. VMs use zvols (block devices) and LXCs use datasets (file systems), so only VMs should be affected by padding overhead.
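You can check which is which directly; the dataset names below are just examples of the usual PVE naming, so adjust them to what zfs list shows for your pool:

zfs get type,volblocksize HDD/vm-100-disk-0
zfs get type,recordsize HDD/subvol-101-disk-0

The zvol of a VM reports type "volume" with a volblocksize, while the dataset of a container reports type "filesystem" with a recordsize.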
 
