Out of space but VM storage doesn't add up?

thatISEguy

New Member
Oct 4, 2022
I've been running Proxmox VE (currently 7.3-4) for a while now, installing and removing VMs for testing. I just noticed, when I tried to set up a NextCloud VM with 2.5 TB of storage, that my data store showed it was out of space even though it had over 3.5 TB free. I was able to install it by using 2 TB of storage instead.

The problem is that the space used by the VMs already installed doesn't add up to anywhere near the total in use. I had 2 VMs and 1 container with the following allocations:

vm-102-disk-0 = 34.36 GB
vm-105-disk-0 = 34.36 GB
subvol-104-disk-0 = 34.46 GB

The new VM, 101, has a disk size of 2.15 TB. Altogether, the data store shows I'm using 4.01 TB out of 4.66 TB. I get this output from "zfs list -o space":

Code:
NAME                           AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
Data_Store2                     602G  3.65T        0B    185K             0B      3.65T
Data_Store2/subvol-104-disk-0  31.3G   678M        0B    678M             0B         0B
Data_Store2/vm-101-disk-0      4.11T  3.53T        0B   6.64G          3.53T         0B
Data_Store2/vm-102-disk-0       635G  57.9G        0B   25.0G          32.9G         0B
Data_Store2/vm-105-disk-0       658G  57.9G        0B   1.57G          56.3G         0B

Could there be lingering files from all the VM installs and removals? If so, how do I clean them up?
 
You are probably using a raidz1/2/3 with a volblocksize that is too low. In that case everything stored on a zvol will consume more space than it should because of padding overhead. See here: https://web.archive.org/web/2020020...or-how-i-learned-stop-worrying-and-love-raidz
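
A quick way to confirm this is to compare the pool's ashift with the volblocksize of the affected zvol (the pool and dataset names below are taken from the "zfs list" output above):

Code:
zpool get ashift Data_Store2
zfs get volblocksize Data_Store2/vm-101-disk-0

If the volblocksize is small relative to the ashift and the raidz width, the padding overhead described in the linked article is the likely culprit.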

You also have a lot of refreservation. So either your pool storage isn't set to use thin provisioning, or discard/TRIM isn't working, so ZFS can't free up the space of deleted data.
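
To see where the reserved space sits, you can list the refreservation of every dataset; if you later decide to make an existing zvol thin provisioned, the reservation can be dropped by hand (a sketch using the dataset names from this thread; note that without a reservation the guest can hit write errors if the pool ever fills up):

Code:
# show the reservation of every dataset in the pool
zfs get -r refreservation Data_Store2
# optionally make an existing zvol thin provisioned by dropping its reservation
zfs set refreservation=none Data_Store2/vm-101-disk-0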
 
You may be right. When I look at the details of the RAID-Z in the GUI, I see RAIDZ1-0. There are 10 disks. Nine are used and 1 is a spare. I used the GUI to create the ZFS configuration and don't think I set a block size. I only set compression to LZ4. Everything else was default.

Thanks for the link. Would it be a good idea to run "zpool trim" or another command (since these aren't SSDs) against that data store just in case to reclaim the free space?
 
To trim zvols you should:
1.) Make sure all your guest OSs discard properly. Linux VMs, for example, should mount their filesystems with the "discard" option in the fstab or run a weekly fstrim -a via cron/systemd (see the sketch after this list).
2.) Make sure you use a storage protocol that supports discard. "IDE" and "VirtIO Block" won't support it, but "SCSI" with "VirtIO SCSI" as the controller will.
3.) Make sure the "discard" checkbox is set for every virtual disk in your VM's Hardware tab.
4.) Your physical disk controller has to support discard too (not all RAID controllers do).
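
A minimal sketch of points 1 and 3 (the VMID 101 is taken from this thread; the storage ID and the assumption that the disk is already attached as scsi0 are only examples, and the GUI checkbox does the same as the qm command):

Code:
# inside the Linux guest: enable the periodic TRIM timer...
systemctl enable --now fstrim.timer
# ...or trim all mounted filesystems once by hand
fstrim -av

# on the Proxmox host: enable discard on an existing SCSI disk of VM 101
qm set 101 --scsi0 Data_Store2:vm-101-disk-0,discard=on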

With a 9-disk raidz1 and an ashift of 12 I would set the volblocksize to at least 64K. Also keep in mind that the volblocksize can only be set at creation; it can't be changed later. To fix this you would need to change the "Block Size" of your ZFS storage (Datacenter -> Storage -> NameOfYourZFSStorage -> Edit -> Block Size: 64k) and then destroy and recreate all virtual disks. The easiest way to recreate them is to migrate the VMs to another node and back, or to back up a VM and then restore it.
In case you run some PostgreSQL/MySQL databases this would be a bad idea, as every IO smaller than 64K would then cause terrible overhead.
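
The same "Block Size" change should also be possible from the CLI (a sketch, assuming the ZFS storage is registered in Proxmox under the ID "Data_Store2"; only zvols created after the change get the new volblocksize):

Code:
# set the block size used for newly created zvols on this storage
pvesm set Data_Store2 --blocksize 64k
# after destroying and recreating a disk, verify its volblocksize
zfs get volblocksize Data_Store2/vm-101-disk-0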
 
Why have a spare sitting around waiting for a failure? If you rebuild things, consider a RAIDZ2 setup. It will help you during recovery scenarios.
I set it up that way when I started playing around with Proxmox and ZFS for the first time and thought it would be a good idea to have a hot spare. I realized later that I should change it but never did.
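
For reference, a 10-disk RAIDZ2 pool can also be created from the shell instead of the node's Disks -> ZFS wizard (a sketch with hypothetical /dev/disk/by-id names; substitute the real device IDs):

Code:
zpool create -o ashift=12 -O compression=lz4 Data_Store2 raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
  /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8 \
  /dev/disk/by-id/ata-DISK9 /dev/disk/by-id/ata-DISK10

The pool then still has to be added as a storage under Datacenter -> Storage -> Add -> ZFS, where the 64K block size can be set right away.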
 
Thanks for the info. Really appreciate the help.

I had enough space on my other data store to move the disks over before reconfiguring the second data store. I set it up as a RAIDZ2 using all 10 disks instead of leaving 1 as a spare. Should I still use 64K as the block size with 10 disks?
 
Yup.
 
Awesome. Seriously appreciate the assistance here. Made the change and now migrating VMs/CTs back over to it.
 
