Hi,
I'm well aware that zpool list and zfs list report different numbers,
because zpool looks at the raw data on disk while zfs reports logical numbers.
However, my numbers are so far apart that I cannot make sense of them.
I have two pools, 'data' and 'vms', and both are fairly empty in terms of
actually used blocks, yet zfs list and pvesm status report them as nearly full.
Here are the numbers:
pvesm status reports almost nothing left on data (78 GB) and vms (221 GB):
Code:
root@p0:~# pvesm status
Name             Type     Status           Total            Used       Available        %
data          zfspool     active       942931968       864723628        78208340   91.71%
local             dir     active        36172160         2832000        33340160    7.83%
local-zfs     zfspool     active        33340380              96        33340284    0.00%
vms           zfspool     active       894173184       672394300       221778884   75.20%
xbu               nfs     active      2512586752       403580928      2109005824   16.06%
zpool list reports lots of free space on data (611 GB) and vms (686 GB):
Code:
root@p0:~# zpool list -v data vms
NAME                                                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data                                                 928G   317G   611G        -         -     0%    34%  1.00x    ONLINE  -
  nvme-WD_BLACK_SN7100_2TB_25166K806980-part5        932G   317G   611G        -         -     0%  34.1%      -    ONLINE
vms                                                  880G   194G   686G        -         -     9%    22%  1.00x    ONLINE  -
  mirror-0                                           880G   194G   686G        -         -     9%  22.1%      -    ONLINE
    nvme-CT1000P3PSSD8_24464C129F1A_1-part4          882G      -      -        -         -      -      -      -    ONLINE
    nvme-WD_BLACK_SN7100_2TB_25166K806980_1-part4    882G      -      -        -         -      -      -      -    ONLINE
zfs list reports (like pvesm status) almost no free space on data (74 GB) and vms (212 GB):
Code:
root@p0:~# zfs list -t all | grep -v rpool
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
data                                                    825G  74.6G    96K  /data
data/vm-103-disk-0                                      825G   582G   317G  -
data/vm-103-disk-0@__replicate_103-0_1761048900__       192K      -   317G  -
vms                                                     641G   212G   104K  /vms
vms/basevol-106-disk-1                                  968M  19.1G   962M  /vms/basevol-106-disk-1
vms/basevol-106-disk-1@__base__                        6.68M      -   962M  -
vms/basevol-106-disk-1@__replicate_106-0_1761048903__     0B      -   962M  -
vms/subvol-107-disk-0                                  3.73G  16.3G  3.67G  /vms/subvol-107-disk-0
vms/subvol-107-disk-0@__replicate_107-0_1761048905__   58.5M      -  3.73G  -
vms/vm-100-disk-0                                      23.4G   232G  3.05G  -
vms/vm-100-disk-0@__replicate_100-0_1761049500__          0B      -  3.05G  -
vms/vm-101-disk-0                                      76.4G   262G  25.7G  -
vms/vm-101-disk-0@__replicate_101-0_1761049502__          0B      -  25.7G  -
vms/vm-102-disk-0                                      74.1G   262G  23.3G  -
vms/vm-102-disk-0@__replicate_102-0_1761048900__          0B      -  23.3G  -
vms/vm-102-disk-1                                       140G   313G  38.3G  -
vms/vm-102-disk-1@__replicate_102-0_1761048900__          0B      -  38.3G  -
vms/vm-103-disk-0                                      58.7G   252G  18.1G  -
vms/vm-103-disk-0@__replicate_103-0_1761048900__       35.7M      -  18.1G  -
vms/vm-104-disk-0                                      81.1G   262G  30.4G  -
vms/vm-104-disk-0@__replicate_104-0_1761048904__       17.2M      -  30.4G  -
vms/vm-105-disk-0                                      45.4G   242G  15.0G  -
vms/vm-105-disk-0@__replicate_105-0_1761049200__       17.0M      -  15.0G  -
vms/vm-105-disk-1                                       137G   313G  35.7G  -
vms/vm-105-disk-1@__replicate_105-0_1761049200__       1.88M      -  35.7G  -
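If a finer breakdown helps, I can also post the output of the command below; as far as I understand zfsprops(7), the -o space columns split USED into snapshot, dataset, refreservation and child portions:
Code:
# per-dataset breakdown of USED into snapshots / data / refreservation / children
zfs list -r -o space data vms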
I actually had to remove a lot of snapshots on vms to bring it down from 98.1% capacity to 75%,
because replication was failing with out-of-space errors. But as you can see, there are almost
700 GB of unused physical blocks on vms.
Accounting for overhead, fragmentation, reservations, and other ZFS tricks, I would
think a 10-20% difference is reasonable, but running out of space with 700 GB
of free blocks seems weird.
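Since reservations are one of my suspects: the VM disks appear to be plain zvols, so I assume refreservation might be what inflates USED so far beyond REFER. This is what I would check (standard zvol properties only, nothing Proxmox-specific):
Code:
# compare reserved space vs. volume size vs. actually written data for every zvol
zfs get -r -t volume refreservation,volsize,referenced data vms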
I already tried zpool trim on both pools, but it did not really help.
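For completeness, this is roughly what I ran and what I would check afterwards; if I remember correctly, zpool status -t shows the per-device trim state and autotrim is off by default:
Code:
zpool trim data
zpool trim vms
# verify the trim actually completed on each device
zpool status -t data vms
# check whether automatic trimming is enabled on the pools
zpool get autotrim data vms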
Any ideas or explanations?
Regards
MH