Can someone please explain this about zfs ?

ozgurerdogan

Renowned Member
Bursa, Turkey
I noticed that an older ZFS dataset uses more space than it really should. To be more specific: the KVM disk has only 51.53 GB in use.

When I move this dataset to a new node, it transfers 152 GB:
Code:
root@s1:~# pve-zsync sync --source 118 --dest 1.2.3.4:D4 --verbose
send from @rep_default_2019-12-08_20:59:49 to rpool/data/vm-118-disk-1@rep_default_2019-12-08_23:01:59 estimated size is 152G

But dataset values are:

Code:
NAME                      PROPERTY              VALUE                  SOURCE
rpool/data/vm-118-disk-1  type                  volume                 -
rpool/data/vm-118-disk-1  creation              Fri Sep  7 20:58 2018  -
rpool/data/vm-118-disk-1  used                  291G                   -
rpool/data/vm-118-disk-1  available             1.17T                  -
rpool/data/vm-118-disk-1  referenced            117G                   -
rpool/data/vm-118-disk-1  compressratio         1.35x                  -
rpool/data/vm-118-disk-1  reservation           none                   default
rpool/data/vm-118-disk-1  volsize               150G                   local
rpool/data/vm-118-disk-1  volblocksize          8K                     default
rpool/data/vm-118-disk-1  checksum              on                     default
rpool/data/vm-118-disk-1  compression           on                     inherited from rpool
rpool/data/vm-118-disk-1  readonly              off                    default
rpool/data/vm-118-disk-1  createtxg             12900                  -
rpool/data/vm-118-disk-1  copies                1                      default
rpool/data/vm-118-disk-1  refreservation        155G                   local
rpool/data/vm-118-disk-1  guid                  6107262585135632680    -
rpool/data/vm-118-disk-1  primarycache          all                    default
rpool/data/vm-118-disk-1  secondarycache        all                    default
rpool/data/vm-118-disk-1  usedbysnapshots       20.3G                  -
rpool/data/vm-118-disk-1  usedbydataset         117G                   -
rpool/data/vm-118-disk-1  usedbychildren        0B                     -
rpool/data/vm-118-disk-1  usedbyrefreservation  154G                   -
rpool/data/vm-118-disk-1  logbias               latency                default
rpool/data/vm-118-disk-1  dedup                 off                    default
rpool/data/vm-118-disk-1  mlslabel              none                   default
rpool/data/vm-118-disk-1  sync                  standard               inherited from rpool
rpool/data/vm-118-disk-1  refcompressratio      1.26x                  -
rpool/data/vm-118-disk-1  written               858M                   -
rpool/data/vm-118-disk-1  logicalused           183G                   -
rpool/data/vm-118-disk-1  logicalreferenced     147G                   -
rpool/data/vm-118-disk-1  volmode               default                default
rpool/data/vm-118-disk-1  snapshot_limit        none                   default
rpool/data/vm-118-disk-1  snapshot_count        none                   default
rpool/data/vm-118-disk-1  snapdev               hidden                 default
rpool/data/vm-118-disk-1  context               none                   default
rpool/data/vm-118-disk-1  fscontext             none                   default
rpool/data/vm-118-disk-1  defcontext            none                   default
rpool/data/vm-118-disk-1  rootcontext           none                   default
rpool/data/vm-118-disk-1  redundant_metadata    all                    default


How can I reduce it to the real size? Do I need to zero-fill the disk?
 
Hi,

There are a few possible reasons for this:
1.) You use a thin-provisioned ZFS pool, so you have to enable discard on the vdisk and run fstrim inside the VM to release the unused blocks (see the sketch below).
2.) You have snapshots that still reference the data.
3.) A combination of the two points above.
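For reference, a rough sketch of how to check both points, assuming VM ID 118 with the disk attached as virtio0 on a storage called local-zfs (adjust the bus, storage name and VM ID to your actual setup):
Code:
# check whether old snapshots are still holding space (point 2)
zfs list -r -t snapshot -o name,used,referenced rpool/data/vm-118-disk-1

# enable discard on the vdisk (point 1); depending on your QEMU version
# the disk may need to sit on a SCSI bus for discard to take effect
qm set 118 --virtio0 local-zfs:vm-118-disk-1,discard=on

# then, inside the VM, release the unused blocks
fstrim -av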
 
Where do you get the 51 GB that the disk supposedly uses? All of these ZFS properties show a much higher usage:
  • referenced
  • usedbysnapshots
  • usedbydataset
  • logicalused
  • logicalreferenced
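If it helps, these can be queried in one go instead of going through the full zfs get all output:
Code:
zfs get referenced,usedbysnapshots,usedbydataset,logicalused,logicalreferenced rpool/data/vm-118-disk-1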
 
From kvm:

Code:
[root@ns34 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda7       976M  674M  252M  73% /
tmpfs           1.7G   28K  1.7G   1% /dev/shm
/dev/vda1        93M   76M   13M  87% /boot
/dev/vda8        88G   53G   34G  61% /home
/dev/vda6       2.0G  124M  1.7G   7% /tmp
/dev/vda3        20G  2.4G   17G  13% /usr
/dev/vda2        30G  6.9G   22G  25% /var

This is not the only machine. All other KVMs are the same, including the Windows ones.
 
Those are the partitions. Can you check how big the disk /dev/vda is within the VM?
Run lsblk if you have it, or sgdisk -p /dev/vda.
 
Also, depending on the pool configuration, you might have additional overhead, especially with raidz: the zvol has 8K blocks, the VM might write in blocks of 4K or less, and ZFS might then need to write an additional 4K parity block for each data block.
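If you want to check whether that applies here, a quick look at the pool layout and the zvol's block size is enough (pool and dataset names taken from your output above):
Code:
zpool status rpool
zfs get volblocksize,compressratio rpool/data/vm-118-disk-1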
 
Code:
[root@ns34 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    252:0    0  150G  0 disk
├─vda1 252:1    0  100M  0 part /boot
├─vda2 252:2    0   30G  0 part /var
├─vda3 252:3    0   20G  0 part /usr
├─vda4 252:4    0    1K  0 part
├─vda5 252:5    0    8G  0 part [SWAP]
├─vda6 252:6    0    2G  0 part /tmp
├─vda7 252:7    0    1G  0 part /
└─vda8 252:8    0 88.9G  0 part /home

So I need to release the free space? How? :)
 
If you count all the partitions together we are at 150GB. If we count the Used column in the answer earlier we get to about 62.5GB.

The way to do this would be to shrink the file systems within the partitions, then resize the partitions. You would also need to move partitions around, because you have quite a few of them and some of the larger ones sit in the middle. Only then can you shrink the disk itself.
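To illustrate just the first step, a minimal sketch for a single filesystem, assuming /dev/vda8 is ext4 and can be unmounted (e.g. from a rescue or live environment); the target size of 60G is only an example:
Code:
umount /home
e2fsck -f /dev/vda8        # mandatory filesystem check before shrinking
resize2fs /dev/vda8 60G    # shrink the ext4 filesystem to 60G
# afterwards the partition itself has to be shrunk (parted/fdisk) to match,
# and only then can the virtual disk be reduced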

Do make a backup before you try to do this because you can easily make a mistake in that whole process!
 
This is what I want to learn. I tried zero-filling and defragmenting the disk, but they do not seem to help much; only a couple of GB were freed.

So you suggest shrinking the partitions to the used size and extending them again? Do you think cloning the KVM from Proxmox might help?
 
Let's take a step back for a second. What exactly do you want to achieve and why?
 
I want to reduce the size of the backups taken from Proxmox, the ZFS snapshots, and the dataset size of the KVMs. As time passes, with disk reads/writes/deletes, the used space grows unnecessarily, so I need to release the free space. Was I able to explain it?
 
Okay, so the disk should stay that size but unused space should not be backed up.

I think what is causing the use of ~150GB on the backup target is the refreservation property set on the dataset.
It causes ZFS to reserve that space and show it as "used"; this is the opposite of thin provisioning.

What happens if you disable the refreservation, thereby converting it to a thin-provisioned disk?
Code:
zfs set refreservation=none rpool/data/vm-118-disk-1
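You can verify the effect afterwards; usedbyrefreservation (154G in your listing) should drop to 0 once the reservation is gone:
Code:
zfs get refreservation,usedbyrefreservation,used rpool/data/vm-118-disk-1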
 
It did not make any difference. But you already gave me an idea: I will shrink the disk to the used size and extend it again. I feel like that will reduce the dataset to the used amount. What do you think?
 
I don't know if this will work; I have never been in that situation myself. Have you tried issuing a trim command from within the VM once discard is enabled for the disk, as wolfgang suggested earlier?
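If you are not sure whether discard is already enabled, you can check the VM configuration first (VM ID taken from your earlier pve-zsync command) before running the trim inside the guest:
Code:
qm config 118 | grep -i discard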
 
