Help understanding ZFS disk usage by VM disks

dzanon

Renowned Member
Nov 22, 2016
Hi everybody,
I have a ZFS pool that is filling up, but I can't understand why:
[screenshot: pool usage]

But the sum of all the VM disks doesn't match the used space shown above:
[screenshot: VM disk list]

Only vm-1107 has snapshots
Code:
# zpool list rpoolData
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpoolData  2.27T  1.34T   945G        -         -    31%    59%  1.00x  ONLINE  -

Code:
# zfs list rpoolData
NAME        USED  AVAIL  REFER  MOUNTPOINT
rpoolData  1.24T  57.3G   170K  /rpoolData

Code:
# zfs get used,usedbydataset,usedbysnapshots rpoolData/vm-1107-disk-0
NAME                      PROPERTY         VALUE     SOURCE
rpoolData/vm-1107-disk-0  used             4.13M     -
rpoolData/vm-1107-disk-0  usedbydataset    817K      -
rpoolData/vm-1107-disk-0  usedbysnapshots  0B        -

# zfs get used,usedbydataset,usedbysnapshots rpoolData/vm-1107-disk-1
NAME                      PROPERTY         VALUE     SOURCE
rpoolData/vm-1107-disk-1  used             787G      -
rpoolData/vm-1107-disk-1  usedbydataset    383G      -
rpoolData/vm-1107-disk-1  usedbysnapshots  0B        -

The pool/dataset is used only for VM disks, no ZFS directory storage is configured, and I can't figure out what's eating up all the space.
Please advise.

Thanks
 
Please share
Bash:
zfs list -t all -o space,reservation,refreservation
qm config 1107
Also check this inside the VM (assuming it's debian/ubuntu)
Bash:
apt install -y gdu
gdu /
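If gdu isn't packaged for the guest, plain du gives a rough equivalent (just a fallback sketch):
Bash:
# Per-directory totals on / only; -x keeps du from crossing filesystem boundaries
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -n 20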
 

Code:
# zfs list -t all -o space,reservation,refreservation
NAME                                           AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  RESERV  REFRESERV
DataMirror                                     98.1G   351G        0B     96K             0B       351G    none       none
DataMirror/vm-1103-disk-0                       238G   203G        0B   63.0G           140G         0B    none       203G
DataMirror/vm-1103-disk-1                       165G   148G        0B   81.1G          67.2G         0B    none       148G
rpool                                           108G  6.45G        0B    104K             0B      6.45G    none       none
rpool/ROOT                                      108G  2.87G        0B     96K             0B      2.87G    none       none
rpool/ROOT/pve-1                                108G  2.87G        0B   2.87G             0B         0B    none       none
rpool/data                                      108G    96K        0B     96K             0B         0B    none       none
rpool/var-lib-vz                                108G  3.51G        0B   3.51G             0B         0B    none       none
rpoolData                                      57.3G  1.24T        0B    170K             0B      1.24T    none       none
rpoolData/vm-1103-disk-0                       57.3G  3.33M        0B    817K          2.54M         0B    none      3.33M
rpoolData/vm-1103-disk-4                       70.4G  13.5G        0B    402M          13.1G         0B    none      13.5G
rpoolData/vm-1103-disk-5                       98.9G   472G        0B    431G          41.6G         0B    none       472G
rpoolData/vm-1107-disk-0                       57.3G  4.13M        0B    817K          3.33M         0B    none      3.33M
rpoolData/vm-1107-disk-0@OK_12-01-26               -     0B         -       -              -          -       -          -
rpoolData/vm-1107-disk-1                        462G   787G        0B    383G           405G         0B    none       405G
rpoolData/vm-1107-disk-1@OK_12-01-26               -     0B         -       -              -          -       -          -

Code:
# qm config 1107
agent: 1
bios: ovmf
boot: order=scsi2;sata0
cores: 10
cpu: x86-64-v2-AES
efidisk0: data:vm-1107-disk-0,efitype=4m,size=1M
memory: 32768
meta: creation-qemu=8.1.5,ctime=1719247597
name: VMName
net0: e1000e=BC:24:11:9F:46:B7,bridge=vmbr2
numa: 0
onboot: 1
ostype: l26
parent: OK_12-01-26
sata0: none,media=cdrom
scsi2: data:vm-1107-disk-1,iothread=1,size=300G
scsihw: virtio-scsi-single
smbios1: uuid=39698857-b840-4efe-a250-6f5efc912d41
sockets: 1
vmgenid: d32cda83-5abd-47f5-9b11-b6474257a1d7

Code:
gdu /

gdu ~ Use arrow keys to navigate, press ? for help
 --- / ---
  105.4 GiB ██████    ▏/opt
   41.8 GiB ██▍       ▏/root
   15.0 GiB ▊         ▏/home
    6.1 GiB ▎         ▏/var
    2.7 GiB           ▏/usr
  229.8 MiB           ▏/boot
   41.2 MiB           ▏/etc
   19.7 MiB           ▏initramfs-3.10.0-1062.1.2.el7.x86_64.img
   48.0 KiB           ▏/tmp
   12.0 KiB           ▏/applications-merged
e   4.0 KiB           ▏/srv
e   4.0 KiB           ▏/mnt
e   4.0 KiB           ▏/media
@       0 B           ▏sbin
@       0 B           ▏lib64
@       0 B           ▏lib
@       0 B           ▏bin
        0 B           ▏.autorelabel
 
Hi @Impact,
thanks for your help. If I understand correctly, the next steps are:
1) remove the snapshot(s)
2) enable thin provision on the dataset
3) run
Code:
zfs list -H -o name,refreservation | grep -Ev "none" | awk '/vm-/ {print $1}' | while read disk; do echo "zfs set refreservation=none $disk"; done
to print the commands that make the existing VM disks thin-provisioned.
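If I read it right, the echo makes this a dry run that only prints the commands; once the printed output looks sane, applying it should just be a matter of feeding those commands to a shell, something like:
Bash:
# Same pipeline as above, but executing the printed "zfs set" commands instead of echoing them
zfs list -H -o name,refreservation | grep -Ev "none" | awk '/vm-/ {print "zfs set refreservation=none " $1}' | sh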

Correct?
Would the last command impact the performance of running VMs, or is it at all dangerous to run in a production environment?

Thanks
 
The full steps are
  • Remove snapshot
  • Enable Thin provision for the ZFS Storage(s) in Datacenter > Storage. This part is somewhat optional, but it's the default for a ZFS install and I recommend it
  • Run the command(s) the snippet prints out to disable refreservation for existing datasets
  • Enable discard for the virtual disk(s) in the VM's Hardware tab. Shut it down and start again or reboot via the GUI's Reboot button so the change is applied
  • Run fstrim -av inside the VM. The value it prints isn't really important but it makes me feel better to have the feedback
  • Check/monitor whether it helped with watch -n1 'zfs list -t all -o space,reservation,refreservation'. This can take a little while to fully settle
I don't consider any of these dangerous but as always with advice from strangers, there's no guarantees.
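For reference, here are those steps condensed into commands, using the names from this thread (VMID 1107, storage "data", snapshot OK_12-01-26; adjust to your setup). I believe the --sparse storage option is what the Thin provision checkbox toggles, so the GUI route works just as well:
Bash:
# 1) Remove the VM snapshot; this also destroys the underlying zvol snapshots
qm delsnapshot 1107 OK_12-01-26
# 2) Enable thin provisioning on the ZFS storage (same as Datacenter > Storage > Thin provision)
pvesm set data --sparse 1
# 3) Drop the refreservation on the existing zvol; repeat per disk, or use the snippet above
zfs set refreservation=none rpoolData/vm-1107-disk-1
# 4) Enable discard on the virtual disk, then stop/start the VM so QEMU picks it up
qm set 1107 --scsi2 data:vm-1107-disk-1,iothread=1,discard=on,size=300G
# 5) Inside the guest: hand the freed blocks back to ZFS
fstrim -av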
 
So,
I checked and can confirm that Thin provision is not enabled on the storage, nor is Discard enabled on the VM disks.
I'll run all the steps after the last backup has finished, so I can be sure I can roll back if anything goes awry.
Thank you very much for helping me, I'll post the result when the job is done. And thanks for that github gist, it's really helpful!

Cheers