Hi, after getting a new server I'm migrating VMs and containers from one PVE node (8.3.4) to another (8.4.1). On 8.3.4 they live on a 2TB ZFS RAID1 pool; on 8.4.1 they have a 4TB ZFS RAID1 pool. As recommended, I'm doing the migration via PBS.
What confuses me a lot is that if I run

zfs list -o name,used,logicalused,referenced,compressratio

on both nodes, I get, for example for vm-112:

NODE:  NAME                        USED   LUSED  REFER  RATIO
8.3.4: rpool/data/vm-112-disk-0    44.4G  60.0G  44.4G  1.36x
8.4.1: ZFS_Mirror_1/vm-112-disk-0  102G   55.7G  35.5G  1.57x

* To note: vm-112 is Windows Server 2019; in the guest OS the C: drive is ~100G, roughly 60G used / 40G free. The VM config is identical (as expected after a PBS restore), with

sata0: ZFS_Mirror_1:vm-112-disk-0,discard=on,size=100G,ssd=1
scsihw: virtio-scsi-pci

Why is USED on 8.4.1 twice the size? It looks like thin provisioning is not working. Have I missed something?
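For reference, this is a sketch of the properties I would compare on both zvols (dataset names taken from the outputs above; whether refreservation is actually set on the new pool is my assumption, not confirmed output — a restore onto a ZFS storage without "Thin provision" enabled would set one, which inflates USED well above LUSED):

```shell
# Compare properties that commonly explain USED >> LUSED on a zvol
# (dataset name from the zfs list output above; values are unknown)
zfs get -o name,property,value refreservation,volblocksize,compression \
    ZFS_Mirror_1/vm-112-disk-0

# If refreservation is not "none", the zvol was restored thick;
# it could be cleared with (assumption, use at your own risk):
# zfs set refreservation=none ZFS_Mirror_1/vm-112-disk-0
```

If refreservation is already "none", the next things I would look at are the volblocksize difference between the two pools and whether a TRIM has run inside the guest since the restore (discard=on only helps once the guest actually issues TRIM).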