ZFS "refreservation" property twice the size of the VM

Toxik

Hi,

I've noticed that my vm-200-disk-0 eats up 6 TB of disk space although its disk is set to only 3 TB:
Code:
root@pve1 ~# zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                     12.7G   202G      104K  /rpool
rpool/ROOT                12.7G   202G       96K  /rpool/ROOT
rpool/ROOT/pve-1          12.7G   202G     12.7G  /
rpool/data                  96K   202G       96K  /rpool/data
vmpool                    6.40T   360G      192K  /vmpool
vmpool/subvol-100-disk-0   983M   360G      983M  /vmpool/subvol-100-disk-0
vmpool/subvol-303-disk-0  2.11G  30.0G     1.98G  /vmpool/subvol-303-disk-0
vmpool/vm-101-disk-0      65.0G   414G     10.4G  -
vmpool/vm-200-disk-0      6.13T  6.44T     38.8G  -
vmpool/vm-700-disk-0      49.2G   392G     10.7G  -
vmpool/vm-800-disk-0      32.5G   392G      112K  -
vmpool/vm-900-disk-0       130G   484G     5.18G  -

root@pve1 ~# zfs get refreservation vmpool/vm-200-disk-0
NAME                  PROPERTY        VALUE      SOURCE
vmpool/vm-200-disk-0  refreservation  6.09T      local

root@pve1 ~# zfs get refreservation vmpool/vm-101-disk-0
NAME                  PROPERTY        VALUE      SOURCE
vmpool/vm-101-disk-0  refreservation  65.0G      local

root@pve1 ~# zfs get refreservation vmpool/vm-900-disk-0
NAME                  PROPERTY        VALUE      SOURCE
vmpool/vm-900-disk-0  refreservation  130G       local

It is caused by the "refreservation" property, which is set to twice the size of the virtual disk for all my VMs.

Why is this?
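For reference, the same mismatch shows up when listing volsize next to refreservation for every zvol in the pool (pool name as in the output above):
Code:
# compare the configured disk size with the reservation for all zvols
zfs get -r -t volume volsize,refreservation vmpool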
 
Hello! I guess this is a different problem from the one described there, because there the "refreservation" was reported as zero ("zfs get all" showed it as "none"). But this problem has a refreservation the size of the VM disk...

I had this problem just a minute ago... and "solved" it for myself with this command:
Code:
zfs set refreservation=none <the pool name>/<the dataset which has a nonzero refreservation>
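In my case, for the zvol shown below, that was something like:
Code:
zfs set refreservation=none myzfspool/vm-100-disk-0
# confirm the reservation is gone and the space has been released
zfs get refreservation,usedbyrefreservation myzfspool/vm-100-disk-0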

But... I was prepared for my VM to take all the space that was provisioned for it, all in one moment, and for every snapshot to live outside of that space. I was NOT prepared for that space to double immediately when the Proxmox replication started and created a snapshot to send to the other node. And even after the full sync finished and the first snapshot had been deleted, the doubled space was not reduced. Maybe something went wrong there (a way to inspect the snapshots is sketched after the version output below). My Proxmox version is
Code:
pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
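To see where such doubled space sits, one can list the zvol's snapshots and how much space each one pins (zvol name as in the outputs below):
Code:
# list all snapshots of the zvol with the space each one holds
zfs list -r -t snapshot -o name,used,referenced myzfspool/vm-100-disk-0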

The dataset had these properties:
Code:
zfs get all myzfspool/vm-100-disk-0
NAME                     PROPERTY              VALUE                  SOURCE
myzfspool/vm-100-disk-0  type                  volume                 -
myzfspool/vm-100-disk-0  creation              Thu Dec 24 14:17 2020  -
myzfspool/vm-100-disk-0  used                  507G                   -
myzfspool/vm-100-disk-0  available             415G                   -
myzfspool/vm-100-disk-0  referenced            249G                   -
myzfspool/vm-100-disk-0  compressratio         1.00x                  -
myzfspool/vm-100-disk-0  reservation           none                   default
myzfspool/vm-100-disk-0  volsize               250G                   local
myzfspool/vm-100-disk-0  volblocksize          8K                     default
myzfspool/vm-100-disk-0  checksum              on                     default
myzfspool/vm-100-disk-0  compression           off                    default
myzfspool/vm-100-disk-0  readonly              off                    default
myzfspool/vm-100-disk-0  createtxg             89                     -
myzfspool/vm-100-disk-0  copies                1                      default
myzfspool/vm-100-disk-0  refreservation        258G                   local
myzfspool/vm-100-disk-0  guid                  9227610209314921855    -
myzfspool/vm-100-disk-0  primarycache          all                    default
myzfspool/vm-100-disk-0  secondarycache        all                    default
myzfspool/vm-100-disk-0  usedbysnapshots       33.3M                  -
myzfspool/vm-100-disk-0  usedbydataset         249G                   -
myzfspool/vm-100-disk-0  usedbychildren        0B                     -
myzfspool/vm-100-disk-0  usedbyrefreservation  258G                   -
myzfspool/vm-100-disk-0  logbias               latency                default
myzfspool/vm-100-disk-0  objsetid              134                    -
myzfspool/vm-100-disk-0  dedup                 off                    default
myzfspool/vm-100-disk-0  mlslabel              none                   default
myzfspool/vm-100-disk-0  sync                  standard               default
myzfspool/vm-100-disk-0  refcompressratio      1.00x                  -
myzfspool/vm-100-disk-0  written               33.3M                  -
myzfspool/vm-100-disk-0  logicalused           248G                   -
myzfspool/vm-100-disk-0  logicalreferenced     248G                   -
myzfspool/vm-100-disk-0  volmode               default                default
myzfspool/vm-100-disk-0  snapshot_limit        none                   default
myzfspool/vm-100-disk-0  snapshot_count        none                   default
myzfspool/vm-100-disk-0  snapdev               hidden                 default
myzfspool/vm-100-disk-0  context               none                   default
myzfspool/vm-100-disk-0  fscontext             none                   default
myzfspool/vm-100-disk-0  defcontext            none                   default
myzfspool/vm-100-disk-0  rootcontext           none                   default
myzfspool/vm-100-disk-0  redundant_metadata    all                    default
myzfspool/vm-100-disk-0  encryption            off                    default
myzfspool/vm-100-disk-0  keylocation           none                   default
myzfspool/vm-100-disk-0  keyformat             none                   default
myzfspool/vm-100-disk-0  pbkdf2iters           0                      default
and after removing the refreservation (note that "used" drops from 507G to 249G, exactly the 258G that "usedbyrefreservation" had been holding):

Code:
zfs get all myzfspool/vm-100-disk-0
NAME                     PROPERTY              VALUE                  SOURCE
myzfspool/vm-100-disk-0  type                  volume                 -
myzfspool/vm-100-disk-0  creation              Thu Dec 24 14:17 2020  -
myzfspool/vm-100-disk-0  used                  249G                   -
myzfspool/vm-100-disk-0  available             415G                   -
myzfspool/vm-100-disk-0  referenced            249G                   -
myzfspool/vm-100-disk-0  compressratio         1.00x                  -
myzfspool/vm-100-disk-0  reservation           none                   default
myzfspool/vm-100-disk-0  volsize               250G                   local
myzfspool/vm-100-disk-0  volblocksize          8K                     default
myzfspool/vm-100-disk-0  checksum              on                     default
myzfspool/vm-100-disk-0  compression           off                    default
myzfspool/vm-100-disk-0  readonly              off                    default
myzfspool/vm-100-disk-0  createtxg             89                     -
myzfspool/vm-100-disk-0  copies                1                      default
myzfspool/vm-100-disk-0  refreservation        none                   local
myzfspool/vm-100-disk-0  guid                  9227610209314921855    -
myzfspool/vm-100-disk-0  primarycache          all                    default
myzfspool/vm-100-disk-0  secondarycache        all                    default
myzfspool/vm-100-disk-0  usedbysnapshots       37.2M                  -
myzfspool/vm-100-disk-0  usedbydataset         249G                   -
myzfspool/vm-100-disk-0  usedbychildren        0B                     -
myzfspool/vm-100-disk-0  usedbyrefreservation  0B                     -
myzfspool/vm-100-disk-0  logbias               latency                default
myzfspool/vm-100-disk-0  objsetid              134                    -
myzfspool/vm-100-disk-0  dedup                 off                    default
myzfspool/vm-100-disk-0  mlslabel              none                   default
myzfspool/vm-100-disk-0  sync                  standard               default
myzfspool/vm-100-disk-0  refcompressratio      1.00x                  -
myzfspool/vm-100-disk-0  written               37.2M                  -
myzfspool/vm-100-disk-0  logicalused           248G                   -
myzfspool/vm-100-disk-0  logicalreferenced     248G                   -
myzfspool/vm-100-disk-0  volmode               default                default
myzfspool/vm-100-disk-0  snapshot_limit        none                   default
myzfspool/vm-100-disk-0  snapshot_count        none                   default
myzfspool/vm-100-disk-0  snapdev               hidden                 default
myzfspool/vm-100-disk-0  context               none                   default
myzfspool/vm-100-disk-0  fscontext             none                   default
myzfspool/vm-100-disk-0  defcontext            none                   default
myzfspool/vm-100-disk-0  rootcontext           none                   default
myzfspool/vm-100-disk-0  redundant_metadata    all                    default
myzfspool/vm-100-disk-0  encryption            off                    default
myzfspool/vm-100-disk-0  keylocation           none                   default
myzfspool/vm-100-disk-0  keyformat             none                   default
myzfspool/vm-100-disk-0  pbkdf2iters           0                      default
 
What does your ZFS pool look like (zpool status)?
And "vm-100-disk-0" is not a dataset, it's a zvol. If you are using any kind of raidz, the default volblocksize is your problem: with a small volblocksize on raidz, parity and padding overhead can force ZFS to reserve roughly twice the volsize for a fully provisioned zvol, which is why the refreservation ends up about twice the disk size.
If you are just using a mirror or striped mirror, you should check that every VM's virtual HDD has the "discard" checkbox checked and that you told the guest OS to use discard/trim.
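For example, roughly like this (the VM ID and zvol are taken from the first post; the local-zfs storage name and the scsi0 slot are only placeholders, adjust them to your setup):
Code:
# check the pool layout and the zvol's block size
zpool status
zfs get volblocksize vmpool/vm-200-disk-0
# re-attach the virtual disk with discard enabled (adjust slot/storage)
qm set 200 --scsi0 local-zfs:vm-200-disk-0,discard=on
# then, from inside the guest OS, release the freed blocks
fstrim -av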
 
I'm sorry for answering so late...
What does your ZFS pool look like (zpool status)?
Code:
zpool status -v
  pool: titanzfspool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 01:43:25 with 0 errors on Sun Sep 12 02:07:27 2021
config:

        NAME          STATE     READ WRITE CKSUM
        titanzfspool  ONLINE       0     0     0
          sda4        ONLINE       0     0     0
          data        ONLINE       0     0     0

errors: No known data errors
And "vm-100-disk-0" is not a dataset its a zvol.
Oh, yes. Sorry for my language...
If you are using any kind of raidz the default volblocksize is your problem.
I'm not sure I'm using it. No mirroring in the pool, that's all I know.
If you are just using a mirror or striped mirror you should check if every VMs virtual HDD got the "discard" checkbox checked and if you told the guest OS to use discard/trim.
Yes. The discard was disabled!.. And it was the first replication step, the full sync of the VM disk. When the full sync finished and I have enabled discard on the storage in ProxMox interface the size of zvol reduced twice.
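A simple way to watch that happen is to compare physical and logical usage while the guest trims (same zvol as above):
Code:
# 'used' and 'referenced' drop as trimmed blocks are freed
zfs get used,referenced,logicalreferenced myzfspool/vm-100-disk-0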

Thank you!
 