Moved a disk and now baffled by the reported disk usage

Jago

Member
May 9, 2019
So I just took two NVMe SSDs, made them into a ZFS mirror via the Proxmox UI, and added it to the available storage. I then migrated one of my VM disks to the new pool and am now baffled by the result: https://pastebin.com/JuKbEzdP

Why is "used" for the exact same disk 30.3G on the original pool and 132G on the new one? Compression is enabled on both. 132G is obviously the total amount of space the disk CAN use in theory, but on the old pool it obviously only uses what it actually needs to.

This is problematic, as the idea was to significantly overprovision the SSD space. Under Disks/ZFS, "Allocated" is still what I expect it to be. Yet after moving a few more disks I cannot continue any further, as Proxmox gives me an "out of space" error despite Disks/ZFS showing 380 GB free and 63 GB allocated.

NAME                      USED   AVAIL  REFER  MOUNTPOINT
rpool/data/vm-100-disk-0  12.7G  3.31T  12.7G  -
nvme-vol1/vm-100-disk-0   132G   122G   12.7G  -

How?

I tried manually running TRIM inside one of the VMs where the entire 128 GB C: drive is suddenly allocated in full on the host: "Optimize-Volume -DriveLetter C -ReTrim -Verbose". It reports that 80 GB+ was cleared and that only ~30 GB is actually in use by real data. Yet even after rebooting the VM entirely, Proxmox won't budge. And yes, I do have discard=on and use VirtIO SCSI, and the drives behaved as expected before being moved.
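
For what it's worth, this is roughly how to double-check the disk options on the Proxmox side (assuming VMID 100 as in the listing above; the output line is only an illustration of what I'd expect):

qm config 100 | grep -i scsi
# should show something like: scsi0: nvme-vol1:vm-100-disk-0,discard=on,size=128G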

I also tried "qm agent VMID fstrim" and "qm rescan", but no change. On one of the drives I also tried filling it completely "for real" and then deleting the garbage data. No effect. Even weirder: moving the disks from the new pool back to the old one works as expected; even though the disk looks "fat" before the move, afterwards it takes only what it actually needs from the host. Moving the disk back to the new pool again results in it using all of its allocated space immediately.
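
A space breakdown of the zvol should show whether the data itself shrank after trimming while something else still pins "used" (a sketch, same dataset name as above):

zfs list -o space nvme-vol1/vm-100-disk-0
# USEDDS is the data actually written, USEDREFRESERV is space held back by a refreservation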

EDIT: I figured out a solution, but not the cause:

zfs set refreservation=none nvme-vol1/vm-101-disk-0

makes the disk behave the way I want after it has been moved to the new pool. The question remains why this change happens when moving disks from the old pool to the new one.
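
Before and after that change you can see exactly what was holding the space (same idea, using the disk from the command above):

zfs get used,refreservation,usedbyrefreservation nvme-vol1/vm-101-disk-0
# with refreservation=none, usedbyrefreservation drops to 0 and "used" falls back to what the disk really references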
 
Did you enable the "Thin provision" checkbox for the new storage? If it is not enabled, reservations are set on the ZFS volumes.
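
For reference, that checkbox maps to the sparse option of the zfspool storage, so it can also be enabled from the CLI or in /etc/pve/storage.cfg (a sketch, assuming the storage is named nvme-vol1; this only affects disks created or moved afterwards, existing volumes keep their reservation):

pvesm set nvme-vol1 --sparse 1
# or add "sparse 1" to the corresponding zfspool entry in /etc/pve/storage.cfg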
 
