Size on ZFS does not match the VM?

killmasta93

Renowned Member
Aug 13, 2017
Hi,
I was wondering if someone could shed some light on an issue I'm having. I recently did a P2V of a Windows server that was installed on bare metal.
What's odd is that vm-100-disk-3 shows 1.11TB used, but inside the Windows VM only about 85 gigs are used.

Code:
root@prometheus:~# zfs get all rpool/data
NAME        PROPERTY              VALUE                  SOURCE
rpool/data  type                  filesystem             -
rpool/data  creation              Wed Dec 19 17:26 2018  -
rpool/data  used                  1.23T                  -
rpool/data  available             472G                   -
rpool/data  referenced            96K                    -
rpool/data  compressratio         1.07x                  -
rpool/data  mounted               yes                    -
rpool/data  quota                 none                   default
rpool/data  reservation           none                   default
rpool/data  recordsize            128K                   default
rpool/data  mountpoint            /rpool/data            default
rpool/data  sharenfs              off                    default
rpool/data  checksum              on                     default
rpool/data  compression           on                     inherited from rpool
rpool/data  atime                 off                    inherited from rpool
rpool/data  devices               on                     default
rpool/data  exec                  on                     default
rpool/data  setuid                on                     default
rpool/data  readonly              off                    default
rpool/data  zoned                 off                    default
rpool/data  snapdir               hidden                 default
rpool/data  aclinherit            restricted             default
rpool/data  createtxg             9                      -
rpool/data  canmount              on                     default
rpool/data  xattr                 on                     default
rpool/data  copies                1                      default
rpool/data  version               5                      -
rpool/data  utf8only              off                    -
rpool/data  normalization         none                   -
rpool/data  casesensitivity       sensitive              -
rpool/data  vscan                 off                    default
rpool/data  nbmand                off                    default
rpool/data  sharesmb              off                    default
rpool/data  refquota              none                   default
rpool/data  refreservation        none                   default
rpool/data  guid                  1006928319264730185    -
rpool/data  primarycache          all                    default
rpool/data  secondarycache        all                    default
rpool/data  usedbysnapshots       0B                     -
rpool/data  usedbydataset         96K                    -
rpool/data  usedbychildren        1.23T                  -
rpool/data  usedbyrefreservation  0B                     -
rpool/data  logbias               latency                default
rpool/data  dedup                 off                    default
rpool/data  mlslabel              none                   default
rpool/data  sync                  disabled               inherited from rpool
rpool/data  dnodesize             legacy                 default
rpool/data  refcompressratio      1.00x                  -
rpool/data  written               0                      -
rpool/data  logicalused           1.31T                  -
rpool/data  logicalreferenced     40K                    -
rpool/data  volmode               default                default
rpool/data  filesystem_limit      none                   default
rpool/data  snapshot_limit        none                   default
rpool/data  filesystem_count      none                   default
rpool/data  snapshot_count        none                   default
rpool/data  snapdev               hidden                 default
rpool/data  acltype               off                    default
rpool/data  context               none                   default
rpool/data  fscontext             none                   default
rpool/data  defcontext            none                   default
rpool/data  rootcontext           none                   default
rpool/data  relatime              off                    default
rpool/data  redundant_metadata    all                    default
rpool/data  overlay               off                    default

Code:
root@prometheus:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     1.29T   472G    96K  /rpool
rpool/ROOT                60.4G   472G    96K  /rpool/ROOT
rpool/ROOT/pve-1          60.4G   472G  60.4G  /
rpool/data                1.23T   472G    96K  /rpool/data
rpool/data/vm-100-disk-2   116G   472G  65.9G  -
rpool/data/vm-100-disk-3  1.11T   472G   909G  -
rpool/data/vm-101-disk-1   947M   472G   873M  -
rpool/swap                8.50G   474G  7.22G  -


Code:
agent: 1
boot: cdn
bootdisk: virtio0
cores: 2
memory: 8000
name: Zeus
net0: virtio=B2:EC:B8:81:9C:F8,bridge=vmbr0
numa: 0
onboot: 1
ostype: win8
scsihw: virtio-scsi-pci
smbios1: uuid=2faba325-30ad-417c-ad3f-64d8e77d4828
sockets: 1
virtio0: local-zfs:vm-100-disk-2,cache=writeback,size=976762584K
virtio1: local-zfs:vm-100-disk-3,backup=0,cache=writeback,size=976762584K
 
If you move a machine with P2V, the empty blocks are copied over too.

You should probably set "discard" on the disk definition, so the zeroed blocks can be reclaimed.
But be aware: this only works with Windows 8 / Server 2012 or later.

Windows 7/2008 R2 only support discard with ATA, not with SCSI.
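
A minimal sketch of how that could look from the Proxmox CLI, assuming the data disk is attached through the virtio-scsi controller so the guest can send UNMAP (VM ID and volume name taken from the config above; the same option is available in the GUI under Hardware -> Hard Disk):

Code:
# with the VM shut down and the old virtio1 entry detached,
# re-attach the zvol as a SCSI disk with discard enabled
qm set 100 --scsi1 local-zfs:vm-100-disk-3,backup=0,discard=on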
 
Thanks for the reply. I added the discard option, but the ZFS pool still shows the same space used.
 
It depends heavily on the Windows version; any version before Server 2012 or Windows 8 does not support the SCSI UNMAP command.

It also does not clean up immediately. You should also check this:

Code:
fsutil behavior query disabledeletenotify

If the parameter is set to "1", TRIM/UNMAP is disabled and space will not be freed.
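
As a side note, the cleanup can usually be kicked off by hand instead of waiting; a sketch using standard Windows tools (the drive letter is just an example):

Code:
:: re-enable TRIM/UNMAP notifications if the query above returned 1
fsutil behavior set disabledeletenotify 0

:: ask the filesystem to send UNMAP for all free space on the volume (retrim)
defrag C: /L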
 
Thanks for the reply. I'm currently running Windows Server 2012 R2 and the query returns 0, so maybe it just needs more time?

 
What I also did was defragment the disk to see if that might be the issue; it dropped usage to 911G, but the VM is still only using around 80 gigs.

Code:
rpool/data/vm-100-disk-3   911G   710G   792G  -

Could the VM config be the issue?

Code:
agent: 1
boot: cdn
bootdisk: virtio0
cores: 2
memory: 8000
name: Zeus
net0: virtio=B2:EC:B8:81:9C:F8,bridge=vmbr0
numa: 0
onboot: 1
ostype: win8
scsi1: local-zfs:vm-100-disk-3,backup=0,discard=on,size=976762584K
scsihw: virtio-scsi-pci
smbios1: uuid=2faba325-30ad-417c-ad3f-64d8e77d4828
sockets: 1
virtio0: local-zfs:vm-100-disk-2,cache=writeback,size=976762584K
 
Hi

Post your zfs get all rpool/data/vm-100-disk-3
and your /etc/pve/storage.cfg.
The default 8K blocksize consumes a lot of space.
I have found that a 32K blocksize gives the best match between actual usage and space consumed, and I have had good results running Windows on RAIDZ-2 with it.
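
For reference, the current block size of an existing zvol and its logical usage can be checked like this (a quick sketch; note that volblocksize is fixed when the zvol is created and cannot be changed afterwards, which is why the disk has to be recreated):

Code:
zfs get volblocksize,volsize,used,logicalused rpool/data/vm-100-disk-3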

regards
Steeve
 
Thanks for the reply, here are the outputs. As for the 8K default, how could I change it to a 32K block size?

Code:
root@prometheus:~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,iso,backup
maxfiles 1
shared 0

zfspool: local-zfs
pool rpool/data
content rootdir,images
sparse 1


Code:
root@prometheus:~# zfs get all rpool/data/vm-100-disk-3
NAME PROPERTY VALUE SOURCE
rpool/data/vm-100-disk-3 type volume -
rpool/data/vm-100-disk-3 creation Sat Dec 22 12:34 2018 -
rpool/data/vm-100-disk-3 used 911G -
rpool/data/vm-100-disk-3 available 710G -
rpool/data/vm-100-disk-3 referenced 792G -
rpool/data/vm-100-disk-3 compressratio 1.02x -
rpool/data/vm-100-disk-3 reservation none default
rpool/data/vm-100-disk-3 volsize 932G local
rpool/data/vm-100-disk-3 volblocksize 8K default
rpool/data/vm-100-disk-3 checksum on default
rpool/data/vm-100-disk-3 compression on inherited from rpool
rpool/data/vm-100-disk-3 readonly off default
rpool/data/vm-100-disk-3 createtxg 1701 -
rpool/data/vm-100-disk-3 copies 1 default
rpool/data/vm-100-disk-3 refreservation none default
rpool/data/vm-100-disk-3 guid 11787056528117388327 -
rpool/data/vm-100-disk-3 primarycache all default
rpool/data/vm-100-disk-3 secondarycache all default
rpool/data/vm-100-disk-3 usedbysnapshots 120G -
rpool/data/vm-100-disk-3 usedbydataset 792G -
rpool/data/vm-100-disk-3 usedbychildren 0B -
rpool/data/vm-100-disk-3 usedbyrefreservation 0B -
rpool/data/vm-100-disk-3 logbias latency default
rpool/data/vm-100-disk-3 dedup off default
rpool/data/vm-100-disk-3 mlslabel none default
rpool/data/vm-100-disk-3 sync disabled inherited from rpool
rpool/data/vm-100-disk-3 refcompressratio 1.02x -
rpool/data/vm-100-disk-3 written 799M -
rpool/data/vm-100-disk-3 logicalused 931G -
rpool/data/vm-100-disk-3 logicalreferenced 808G -
rpool/data/vm-100-disk-3 volmode default default
rpool/data/vm-100-disk-3 snapshot_limit none default
rpool/data/vm-100-disk-3 snapshot_count none default
rpool/data/vm-100-disk-3 snapdev hidden default
rpool/data/vm-100-disk-3 context none default
rpool/data/vm-100-disk-3 fscontext none default
rpool/data/vm-100-disk-3 defcontext none default
rpool/data/vm-100-disk-3 rootcontext none default
rpool/data/vm-100-disk-3 redundant_metadata all default
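
On the question of how to change to 32K: the default for newly created disks is set per storage with a blocksize line in the zfspool section of /etc/pve/storage.cfg; it does not affect existing zvols, whose volblocksize is fixed. A sketch based on the config shown above:

Code:
zfspool: local-zfs
pool rpool/data
content rootdir,images
sparse 1
blocksize 32k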
 
Hi

Storage with ZFS is not simple to understand, for me too!
But the size of your VM disk is the volsize property (932G); you need to look in the Windows storage admin tools to see used versus unused space.

Code:
rpool/data/vm-100-disk-3  used             911G
rpool/data/vm-100-disk-3  usedbysnapshots  120G
rpool/data/vm-100-disk-3  usedbydataset    792G

Here you have the space used by the dataset without snapshots and the space used by snapshots, which together give the total used.
Then you need to use another blocksize to get a space usage that matches what the VM actually uses.
Search this forum and the web for "zfs blocksize":
https://forum.proxmox.com/threads/u...idz3-pool-vs-mirrored-pool.65018/#post-293908

Once you have found the right blocksize and want to change it, you have to use dd or ddrescue to clone your VM disk to a raw image.
My storage.cfg:

Code:
zfspool: zfs-vm
pool zmarina/data
blocksize 32k
sparse
content images,rootdir

Then you create your new zvol (with the blocksize that suits you) with the same volsize of 932G and transfer your raw image onto it. You will see that usedbydataset then matches what the VM uses, and afterwards you can easily reduce the volsize with zfs if needed.
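
A rough sketch of that clone workflow on the Proxmox host, assuming the zvol device nodes live under /dev/zvol/ (the usual ZFS-on-Linux layout); the /mnt/backup path and the vm-100-disk-5 name are just placeholders, and the VM should be shut down first:

Code:
# 1) dump the existing zvol to a raw image (needs enough free space to hold it)
dd if=/dev/zvol/rpool/data/vm-100-disk-3 of=/mnt/backup/disk3.raw bs=1M status=progress

# 2) create the new zvol: sparse, 32K volblocksize, same volsize as the original
zfs create -s -V 932G -o volblocksize=32k rpool/data/vm-100-disk-5

# 3) write the image back, skipping zero blocks so the new zvol stays sparse
dd if=/mnt/backup/disk3.raw of=/dev/zvol/rpool/data/vm-100-disk-5 bs=1M conv=sparse status=progress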
 
Thanks for the reply. I thought it was a Windows thing, but I tried with a Linux VM and have the same issue: the VM is using 4.5TB, but Proxmox shows 8.5TB used, roughly double, which confuses me since I have the discard option enabled. I know the disk is configured as 7TB, but it is not actually using 7TB.

The fstrim service is also enabled in the VM:

Code:
root@cloud:~# sudo systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
   Active: active (waiting) since Wed 2020-09-02 19:47:46 -05; 14min ago
  Trigger: Mon 2020-09-07 00:00:00 -05; 4 days left
     Docs: man:fstrim

Sep 02 19:47:46 cloud systemd[1]: Started Discard unused blocks once a week.

Code:
rpool/data/vm-146-disk-1  8.57T  1.59T  8.57T  -

Code:
root@cloud:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.2G     0  7.2G   0% /dev
tmpfs           1.5G  688K  1.5G   1% /run
/dev/vda1       126G   11G  110G   9% /
tmpfs           7.2G     0  7.2G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.2G     0  7.2G   0% /sys/fs/cgroup
data            6.8T  4.5T  2.4T  66% /data
tmpfs           1.5G     0  1.5G   0% /run/user/0


Code:
root@pve:~# cat /etc/pve/qemu-server/146.conf
agent: 1
bootdisk: virtio0
cores: 4
ide2: local:iso/ubuntu-18.04-server-amd64.iso,media=cdrom
memory: 15000
name: Cloud
net0: virtio=DA:21:E2:3F:EE:D2,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: beforefixspace
scsihw: virtio-scsi-pci
smbios1: uuid=44462f87-ed62-4409-8e4e-3f066e9dcb5b
sockets: 1
virtio0: local-zfs:vm-146-disk-0,size=128G
virtio1: local-zfs:vm-146-disk-1,discard=on,size=7100G
vmgenid: bc5d7c04-0d55-4d94-8dbf-c6bc17f01a65
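
A quick way to check whether the discard requests actually reach ZFS is to run a trim by hand inside the guest and then watch the zvol on the host (a sketch; the dataset name is taken from the output above):

Code:
# inside the VM: trim all mounted filesystems that support discard, with a per-mount summary
fstrim -av

# on the Proxmox host: see whether the zvol shrinks afterwards
zfs get used,referenced,logicalused rpool/data/vm-146-disk-1

If fstrim reports trimmed bytes but the used space never drops, the discard requests may not be getting through the emulated disk; in that case moving the disk to the virtio-scsi controller (an scsiX entry with discard=on, as suggested earlier in the thread) is worth trying.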
 
So, to change the block size, would I first run:

Code:
zfs create -V 128gb -b 32k rpool/data/vm-100-disk-5

The only issue is that when running this command it creates the disk but already shows 128 gigs used. Is there any way to create it empty?

EDIT: I had to add blocksize 32k to the storage config and then create the disk in the web GUI to get the volsize, so I'm guessing I now need to dd?

Code:
dd if=rpool/data/vm-100-disk-3 of=rpool/data/vm-100-disk-5

Something like this?
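
For what it's worth, the dd line as written would likely not work, because those are dataset names rather than filesystem paths; with ZFS on Linux the zvols are exposed as block devices under /dev/zvol/, so the copy would look roughly like this (VM powered off; and creating the zvol sparse, via -s or sparse 1 in storage.cfg as you did, is what keeps it from showing the full size as used):

Code:
# copy block for block via the zvol device nodes, skipping zero runs so the sparse target stays thin
dd if=/dev/zvol/rpool/data/vm-100-disk-3 of=/dev/zvol/rpool/data/vm-100-disk-5 bs=1M conv=sparse status=progress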
 
Thanks for the reply.
I had another VM with the same issue, a test server. I followed the procedure above, but what's odd is that the VM is only using about 6 gigs while ZFS shows something quite different.

Code:
root@osc:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            955M     0  955M   0% /dev
tmpfs           196M  3.1M  193M   2% /run
/dev/vda1       195G  3.7G  182G   2% /
tmpfs           976M     0  976M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           976M     0  976M   0% /sys/fs/cgroup
tmpfs           100K     0  100K   0% /run/lxcfs/controllers
tmpfs           196M     0  196M   0% /run/user/0





Code:
rpool/data/vm-106-disk-0  39.0G  394G  39.0G  -

Code:
root@prometheus2:~# zfs get all rpool/data/vm-106-disk-0
NAME PROPERTY VALUE SOURCE
rpool/data/vm-106-disk-0 type volume -
rpool/data/vm-106-disk-0 creation Thu Sep 3 13:41 2020 -
rpool/data/vm-106-disk-0 used 39.0G -
rpool/data/vm-106-disk-0 available 391G -
rpool/data/vm-106-disk-0 referenced 39.0G -
rpool/data/vm-106-disk-0 compressratio 1.13x -
rpool/data/vm-106-disk-0 reservation none default
rpool/data/vm-106-disk-0 volsize 200G local
rpool/data/vm-106-disk-0 volblocksize 8K default
rpool/data/vm-106-disk-0 checksum on default
rpool/data/vm-106-disk-0 compression on inherited from rpool
rpool/data/vm-106-disk-0 readonly off default
rpool/data/vm-106-disk-0 createtxg 10596274 -
rpool/data/vm-106-disk-0 copies 1 default
rpool/data/vm-106-disk-0 refreservation none default
rpool/data/vm-106-disk-0 guid 8780195534579167739 -
rpool/data/vm-106-disk-0 primarycache all default
rpool/data/vm-106-disk-0 secondarycache all default
rpool/data/vm-106-disk-0 usedbysnapshots 0B -
rpool/data/vm-106-disk-0 usedbydataset 39.0G -
rpool/data/vm-106-disk-0 usedbychildren 0B -
rpool/data/vm-106-disk-0 usedbyrefreservation 0B -
rpool/data/vm-106-disk-0 logbias latency default
rpool/data/vm-106-disk-0 dedup off default
rpool/data/vm-106-disk-0 mlslabel none default
rpool/data/vm-106-disk-0 sync standard inherited from rpool
rpool/data/vm-106-disk-0 refcompressratio 1.13x -
rpool/data/vm-106-disk-0 written 39.0G -
rpool/data/vm-106-disk-0 logicalused 43.8G -
rpool/data/vm-106-disk-0 logicalreferenced 43.8G -
rpool/data/vm-106-disk-0 volmode default default
rpool/data/vm-106-disk-0 snapshot_limit none default
rpool/data/vm-106-disk-0 snapshot_count none default
rpool/data/vm-106-disk-0 snapdev hidden default
rpool/data/vm-106-disk-0 context none default
rpool/data/vm-106-disk-0 fscontext none default
rpool/data/vm-106-disk-0 defcontext none default
rpool/data/vm-106-disk-0 rootcontext none default
rpool/data/vm-106-disk-0 redundant_metadata all default

After running dd onto the new disk with the 32K blocksize, it did come down, but it is still higher than what the VM reports:

Code:
root@prometheus2:~# zfs get all rpool/data/vm-106-disk-1
NAME PROPERTY VALUE SOURCE
rpool/data/vm-106-disk-1 type volume -
rpool/data/vm-106-disk-1 creation Thu Sep 3 22:17 2020 -
rpool/data/vm-106-disk-1 used 28.9G -
rpool/data/vm-106-disk-1 available 387G -
rpool/data/vm-106-disk-1 referenced 28.9G -
rpool/data/vm-106-disk-1 compressratio 1.51x -
rpool/data/vm-106-disk-1 reservation none default
rpool/data/vm-106-disk-1 volsize 200G local
rpool/data/vm-106-disk-1 volblocksize 32K -
rpool/data/vm-106-disk-1 checksum on default
rpool/data/vm-106-disk-1 compression on inherited from rpool
rpool/data/vm-106-disk-1 readonly off default
rpool/data/vm-106-disk-1 createtxg 10603761 -
rpool/data/vm-106-disk-1 copies 1 default
rpool/data/vm-106-disk-1 refreservation none default
rpool/data/vm-106-disk-1 guid 9612949235200090184 -
rpool/data/vm-106-disk-1 primarycache all default
rpool/data/vm-106-disk-1 secondarycache all default
rpool/data/vm-106-disk-1 usedbysnapshots 0B -
rpool/data/vm-106-disk-1 usedbydataset 28.9G -
rpool/data/vm-106-disk-1 usedbychildren 0B -
rpool/data/vm-106-disk-1 usedbyrefreservation 0B -
rpool/data/vm-106-disk-1 logbias latency default
rpool/data/vm-106-disk-1 dedup off default
rpool/data/vm-106-disk-1 mlslabel none default
rpool/data/vm-106-disk-1 sync standard inherited from rpool
rpool/data/vm-106-disk-1 refcompressratio 1.51x -
rpool/data/vm-106-disk-1 written 28.9G -
rpool/data/vm-106-disk-1 logicalused 43.8G -
rpool/data/vm-106-disk-1 logicalreferenced 43.8G -
rpool/data/vm-106-disk-1 volmode default default
rpool/data/vm-106-disk-1 snapshot_limit none default
rpool/data/vm-106-disk-1 snapshot_count none default
rpool/data/vm-106-disk-1 snapdev hidden default
rpool/data/vm-106-disk-1 context none default
rpool/data/vm-106-disk-1 fscontext none default
rpool/data/vm-106-disk-1 defcontext none default
rpool/data/vm-106-disk-1 rootcontext none default
rpool/data/vm-106-disk-1 redundant_metadata all default
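
One hedged note on the numbers above: a plain dd copies every block that was ever written in the guest, including blocks of files that were later deleted but never trimmed, which is why logicalused stays at 43.8G while df only reports a few gigs. If trim is not available, zero-filling the free space inside the guest is a commonly used workaround; since compression is enabled on the pool, ZFS stores all-zero blocks as holes, so that space should be released. A sketch (the file name is arbitrary, and the filesystem briefly fills up completely, so do it during a quiet moment):

Code:
# inside the guest: fill free space with zeros, flush, then remove the file
dd if=/dev/zero of=/zerofill bs=1M status=progress; sync; rm -f /zerofill; sync

# on the host: check the zvol afterwards
zfs get used,logicalused,compressratio rpool/data/vm-106-disk-1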
 
