question about ZFS UI vs. cmdline

kriznik

Member
Sep 29, 2023
I'm wondering where the UI takes its information from?
Because the pool shows me this:

Code:
root@ragnar:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ZFS   36.4T  5.72T  30.7T        -         -     1%    15%  1.00x    ONLINE  -
root@ragnar:~#

but UI shows this:
[attached UI screenshots]

What could be the cause? E.g. can it somehow be forced to update to match reality?
 
There are two main commands to use with ZFS: "zpool" and "zfs". "zpool" will show sizes including parity data. "zfs" will show sizes with parity data already subtracted and will also account for stuff like quotas. So your outputs might be very different depending on whether you run "zpool list" vs "zfs list".
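As a rough illustration of that parity accounting (the disk count and per-disk size below are assumptions for the example, not taken from the thread):

```python
def usable_raidz(num_disks, disk_size_tib, parity):
    """Approximate usable capacity of a raidz vdev.

    zpool list reports raw capacity including parity; zfs list reports
    usable space after parity. This ignores metadata and slop space,
    so real numbers come out a bit lower.
    """
    raw = num_disks * disk_size_tib
    usable = raw * (num_disks - parity) / num_disks
    return raw, usable

# Hypothetical 8 x 4.55 TiB disks in raidz2:
raw, usable = usable_raidz(num_disks=8, disk_size_tib=4.55, parity=2)
print(f"raw (zpool view): {raw:.1f} TiB, usable (zfs view): {usable:.1f} TiB")
```

So a pool that zpool shows as ~36.4T raw would be on the order of ~27T usable in the zfs view, before metadata and slop space are taken off.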
 
Thanks for pointing this out.
Well, more confusion here though.

It looks like some of the disks have a bigger "avail" size than the actual dedicated size.
For example, vm-300-disk-1 seems to have 6T allocated and 562G used (if I'm reading this correctly), but the actual disk is defined in Proxmox as 500GB.
[attached screenshot]
vm-300-disk-0 and vm-700-disk-0 look suspicious as well.

Code:
root@ragnar:~# zfs list                                                                                                                            
NAME                    USED  AVAIL  REFER  MOUNTPOINT                                                                                            
ZFS                    20.2T  5.58T   222K  /ZFS                                                                                                  
ZFS/subvol-102-disk-0  74.3G   676G  74.3G  /ZFS/subvol-102-disk-0                                                                                
ZFS/subvol-113-disk-1  2.55G  17.5G  2.55G  /ZFS/subvol-113-disk-1                                                                                
ZFS/subvol-666-disk-0   579M  1.43G   579M  /ZFS/subvol-666-disk-0                                                                                
ZFS/vm-300-disk-0      19.5T  21.2T  3.92T  -                                                                                                      
ZFS/vm-300-disk-1       562G  6.07T  51.0G  -                                                                                                      
ZFS/vm-700-disk-0      34.3G  5.60T  9.78G  -                                                                                                      
root@ragnar:~#

[attached UI screenshots]
 
My guess would be that you are using a raidz1/2 without increasing the volblocksize. Then it wouldn't be uncommon that storing something like 300GB on a VM's virtual disk would consume something like 562GB of actual space on the pool. Search this forum for "padding overhead".
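A rough sketch of where that padding comes from (the 8-disk raidz2 width and ashift=12 here are assumptions for illustration): raidz adds parity per stripe and then pads each block's allocation up to a multiple of (parity + 1) sectors, so small volblocksizes can waste a lot of space.

```python
import math

def raidz_asize(data_sectors, width, parity):
    """Simplified model of how many sectors a raidz vdev allocates for a
    block of `data_sectors` data sectors: parity sectors are added per
    stripe, then the total is padded up to a multiple of (parity + 1)."""
    stripes = math.ceil(data_sectors / (width - parity))
    total = data_sectors + stripes * parity
    mult = parity + 1
    return math.ceil(total / mult) * mult

# Assumed example: 8-disk raidz2, ashift=12 (4 KiB sectors)
sector = 4096
for volblocksize in (8 * 1024, 16 * 1024, 64 * 1024, 128 * 1024):
    d = volblocksize // sector
    alloc = raidz_asize(d, width=8, parity=2)
    print(f"volblocksize={volblocksize // 1024:>3}K: "
          f"{alloc} sectors allocated for {d} data sectors "
          f"({alloc / d:.2f}x raw expansion)")
```

With these assumed numbers an 8K volblocksize allocates 6 sectors for every 2 data sectors (3x raw), while 64K gets down to 1.5x; the ideal for 8-wide raidz2 would be 8/6 ≈ 1.33x, and everything above that shows up as padding overhead.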
 
I think I've read that here somewhere, maybe even from you :), so I'm using raidz2-0 with 64K blocks in this pool.
It's still a bit confusing what's what and what to do better in the configuration though.

Honestly, I'm not sure what AVAIL refers to, why it differs for each disk while they're on one pool, and how AVAIL can be higher than the reserved size.

Code:
root@ragnar:~# zfs list -o space                                                                                                                 
NAME                   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD                                                                   
ZFS                    5.53T  20.2T        0B    222K             0B      20.2T                                                                   
ZFS/subvol-102-disk-0   676G  74.3G        0B   74.3G             0B         0B                                                                   
ZFS/subvol-113-disk-1  17.5G  2.55G        0B   2.55G             0B         0B                                                                   
ZFS/subvol-666-disk-0  1.43G   579M        0B    579M             0B         0B                                                                   
ZFS/vm-300-disk-0      21.1T  19.5T     2.26G   3.92T          15.6T         0B                                                                   
ZFS/vm-300-disk-1      6.02T   562G     2.28G   51.1G           508G         0B                                                                   
ZFS/vm-700-disk-0      5.55T  34.3G        0B   9.78G          24.5G         0B                                                                   
ZFS/vm-701-disk-0      5.55T  34.3G        0B   12.0G          22.2G         0B                                                                   
ZFS/vm-750-disk-0      5.54T  17.1G        0B   1.83G          15.3G         0B                                                                   
root@ragnar:~#

From the above I'd assume the reserved space in the pool is around 17TB and roughly 5TB is used,
which does not match the 22TB out of 28TB stated in the UI.
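One way to reconcile the AVAIL column: for a zvol with a refreservation, AVAIL is roughly the pool-level AVAIL plus that zvol's still-unused reservation (the USEDREFRESERV column), which is why it can exceed the virtual disk's configured size. A quick sanity check, with values transcribed from the `zfs list -o space` output above:

```python
# Pool-level AVAIL from the ZFS root row, in TiB
pool_avail = 5.53

# USEDREFRESERV per zvol (unused part of its refreservation), in TiB
usedrefreserv = {
    "ZFS/vm-300-disk-0": 15.6,
    "ZFS/vm-300-disk-1": 0.508,
}

# Per-zvol AVAIL ~= pool AVAIL + remaining refreservation
for name, reserv in usedrefreserv.items():
    print(f"{name}: AVAIL ~ {pool_avail + reserv:.2f} TiB")
```

That comes out to ~21.1T and ~6.04T, matching the 21.1T and 6.02T shown for those zvols, so the per-disk AVAIL numbers aren't independent free space; they all draw on the same pool.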
 