zfs raw vm taking up full space after migration

Hi

I have four VMs on a ZFS pool:
Code:
NAME                   USED  AVAIL     REFER  MOUNTPOINT
pool01                18.7T  2.97T      128K  /pool01
pool01/vm-100-disk-0   530G  3.30T      189G  -
pool01/vm-100-disk-1  7.52T  9.63T      882G  -
pool01/vm-101-disk-0  6.66T  3.12T     6.52T  -
pool01/vm-102-disk-0  1.33T  3.00T     1.30T  -
pool01/vm-103-disk-0  2.67T  5.56T     82.2G  -
rpool                 14.7G   200G      104K  /rpool
rpool/ROOT            14.6G   200G       96K  /rpool/ROOT
rpool/ROOT/pve-1      14.6G   200G     10.6G  /
rpool/data             152K   200G       96K  /rpool/data

Now, I made an error and stored two of the VMs on rpool. I wanted to correct that, so I migrated them to pool01.
The problem I see now is that the disks take up the full space defined in Proxmox.
Could this be because I didn't have the QEMU guest agent installed, with guest trim, before the disk move?

How can I solve this?
Can I move them back to rpool with some option so they only use the space they actually need?
Alternatively, should I set up a pool02 with more disks to get more space?

Thanks in advance!
 
The problem I see now is that the disks take up the full space defined in Proxmox.
Could this be because I didn't have the QEMU guest agent installed, with guest trim, before the disk move?
No, it's probably because the Thin provision option was not set on the storage where you created it.
How can I solve this?
I expect that changing the refreservation of the disk (to 0) will achieve this.
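As a sketch, assuming the affected zvol is pool01/vm-102-disk-0 (any of the migrated disks works the same way), run as root on the Proxmox host:

```shell
# Show the virtual size, how much the reservation alone is pinning,
# and how much data is actually written
zfs get volsize,refreservation,usedbyrefreservation,referenced pool01/vm-102-disk-0

# Drop the reservation so only the blocks actually written
# count against the pool's free space
zfs set refreservation=none pool01/vm-102-disk-0
```

After that, `zfs list` should show USED for the zvol close to its REFER value instead of the full provisioned size.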
 
No, it's probably because the Thin provision option was not set on the storage where you created it.

I expect that changing the refreservation of the disk (to 0) will achieve this.
Thank you, yes you are right: thin provisioning was not enabled.

Is it recommended to change refreservation? There is the option to add another vdev, but 5 TB disks that only use around 300 GB each are a mess to handle.
 
Is it recommended to change refreservation?
All zvols are thin due to the nature of ZFS. All refreservation does is make sure there will be enough room for the whole zvol. Setting it to 0 (none) just means that the size is not subtracted from the free space (hence the name reservation). It will not change anything on the zvol itself and you can therefore easily undo it when you change your mind in the future.
Check refreservation in the ZFS documentation if you need some more information about it.

EDIT: I forgot to mention that this is also how Proxmox does it: if the ZFS storage is thin, refreservation is left at 0 (none); otherwise it is set to the virtual disk size.
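To illustrate that it is easily reversible, assuming the same dataset names as in the listing above:

```shell
# Thin behaviour: nothing reserved up front, USED shrinks towards REFER
zfs set refreservation=none pool01/vm-101-disk-0

# Thick behaviour again: reserve the full virtual disk size
# (refreservation=auto derives the reservation from volsize)
zfs set refreservation=auto pool01/vm-101-disk-0

# Verify the current state
zfs get volsize,refreservation,used pool01/vm-101-disk-0
```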
 
