ZFS free space ?? [SOLVED]

MEYNIER

Active Member
Feb 24, 2019
Hi everyone,
Sorry to disturb you, but the ZFS available space is driving me crazy.

Long story short: I have one node with 8 disks of 6 TB in RAIDZ2, which gives me a pool of around 30 TiB.

I have around 20 VMs which should take around 13 TiB, but 28 TiB is occupied?!

I think it's due to pve-zsync, since I use it to sync all my VMs to a FreeNAS server, but I don't understand why, or how I can manage it.

To be honest, I am completely lost.
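If the pve-zsync snapshots are to blame, something like the following should show how much space each snapshot is holding back (just a quick sketch, assuming all the VM disks live directly under the STORAGE pool):

zfs list -r -t snapshot -o name,used,refer -s used STORAGE
zfs get -r -t volume usedbysnapshots,usedbydataset,usedbyrefreservation STORAGE

The first command lists every snapshot sorted by the space it pins; the second breaks down each zvol's usage.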
 
After digging deeper, it seems the issue comes from the ZFS volsize, or something like that.
Indeed, if I look at the details of one VM (a Debian Linux guest with 3T assigned in the Proxmox GUI), I get these numbers:

root@proxmox1:~/pve-zync# zfs get all STORAGE/vm-17003-disk-0
NAME PROPERTY VALUE SOURCE
STORAGE/vm-17003-disk-0 type volume -
STORAGE/vm-17003-disk-0 creation Sat Dec 21 13:32 2019 -
STORAGE/vm-17003-disk-0 used 5.08T -
STORAGE/vm-17003-disk-0 available 2.50T -
STORAGE/vm-17003-disk-0 referenced 2.54T -
STORAGE/vm-17003-disk-0 compressratio 1.32x -
STORAGE/vm-17003-disk-0 reservation none default
STORAGE/vm-17003-disk-0 volsize 1.50T local
STORAGE/vm-17003-disk-0 volblocksize 8K default
STORAGE/vm-17003-disk-0 checksum on default
STORAGE/vm-17003-disk-0 compression lz4 inherited from STORAGE
STORAGE/vm-17003-disk-0 readonly off default
STORAGE/vm-17003-disk-0 createtxg 10361504 -
STORAGE/vm-17003-disk-0 copies 1 default
STORAGE/vm-17003-disk-0 refreservation 1.55T local
STORAGE/vm-17003-disk-0 guid 17173177094446957103 -
STORAGE/vm-17003-disk-0 primarycache all default
STORAGE/vm-17003-disk-0 secondarycache all default
STORAGE/vm-17003-disk-0 usedbysnapshots 1.19T -
STORAGE/vm-17003-disk-0 usedbydataset 2.54T -
STORAGE/vm-17003-disk-0 usedbychildren 0B -
STORAGE/vm-17003-disk-0 usedbyrefreservation 1.35T -
STORAGE/vm-17003-disk-0 logbias latency default
STORAGE/vm-17003-disk-0 objsetid 66960 -
STORAGE/vm-17003-disk-0 dedup off default
STORAGE/vm-17003-disk-0 mlslabel none default
STORAGE/vm-17003-disk-0 sync disabled inherited from STORAGE
STORAGE/vm-17003-disk-0 refcompressratio 1.33x -
STORAGE/vm-17003-disk-0 written 204G -
STORAGE/vm-17003-disk-0 logicalused 2.05T -
STORAGE/vm-17003-disk-0 logicalreferenced 1.40T -
STORAGE/vm-17003-disk-0 volmode default default
STORAGE/vm-17003-disk-0 snapshot_limit none default
STORAGE/vm-17003-disk-0 snapshot_count none default
STORAGE/vm-17003-disk-0 snapdev hidden default
STORAGE/vm-17003-disk-0 context none default
STORAGE/vm-17003-disk-0 fscontext none default
STORAGE/vm-17003-disk-0 defcontext none default
STORAGE/vm-17003-disk-0 rootcontext none default
STORAGE/vm-17003-disk-0 redundant_metadata all default
STORAGE/vm-17003-disk-0 encryption off default
STORAGE/vm-17003-disk-0 keylocation none default
STORAGE/vm-17003-disk-0 keyformat none default
STORAGE/vm-17003-disk-0 pbkdf2iters 0 default

used 5.08T?!
I googled this and it seems to be due to the combination of volblocksize and RAIDZ2:
https://forum.proxmox.com/threads/zfs-pool-not-showing-correct-usage.31111/

How can I deal with this?
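If I add up the breakdown from the zfs get output above, the 5.08T is at least consistent:

used 5.08T = usedbysnapshots 1.19T + usedbydataset 2.54T + usedbychildren 0B + usedbyrefreservation 1.35T

So roughly 1.19T is pinned by the pve-zsync snapshots, and 1.35T is not data at all but space held back by the refreservation: if I understand it right, with snapshots present the refreservation (1.55T) has to guarantee that the whole volsize can be rewritten, and only the 204G written since the last snapshot counts against it, leaving about 1.35T reserved. On top of that, usedbydataset (2.54T) is much bigger than logicalreferenced (1.40T) even with lz4 on, which looks exactly like the RAIDZ2 parity/padding overhead for 8K blocks described in that thread.

If I read the docs correctly, the block size for new zvols can be raised on the Proxmox storage, something like this (assuming the storage ID is also STORAGE; it only affects disks created afterwards, existing ones would have to be moved or recreated):

pvesm set STORAGE --blocksize 16k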
 
Last piece of info I found:
From my point of view, I think the main issue I'm running into comes from the usedbyrefreservation setting:

STORAGE/vm-9003-disk-0 used 6.65T -
STORAGE/vm-9003-disk-0 usedbysnapshots 10.2G -
STORAGE/vm-9003-disk-0 usedbydataset 3.55T -
STORAGE/vm-9003-disk-0 usedbychildren 0B -
STORAGE/vm-9003-disk-0 usedbyrefreservation 3.09T -
STORAGE/vm-9003-disk-0 logicalused 2.01T -
STORAGE/vm-9003-disk-0@rep_Backup_9003_Daily_2020-03-23_22:48:45 used 488M -
STORAGE/vm-9003-disk-0@rep_Backup_9003_Daily_2020-03-24_22:35:15 used 13.7M -
STORAGE/vm-9003-disk-0@rep_Backup_9003_Daily_2020-03-25_22:05:10 used 1.55G -
STORAGE/vm-9003-disk-0@rep_Backup_9003_Daily_2020-03-26_22:27:43 used 1.44G -
STORAGE/vm-9003-disk-0@rep_Backup_9003_Daily_2020-03-28_03:36:02 used 1.48G -
STORAGE/vm-9003-disk-0@rep_Backup_9003_Daily_2020-03-28_23:35:14 used 1.44G -


Do you know if I can set this to a lower value?
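From what I read, the refreservation can be removed per zvol to make it thin-provisioned, which should give the usedbyrefreservation space back immediately. A minimal sketch, assuming the disk name as above (not something I have tested here yet):

zfs set refreservation=none STORAGE/vm-9003-disk-0

The trade-off, as far as I understand, is that a thin zvol is no longer guaranteed its full volsize, so the VM could hit write errors if the pool really fills up. Apparently ticking "Thin provision" (the sparse option) on the ZFS storage in Proxmox does the same for newly created disks.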

KR