ZFS Mount showing less size than actual zpool size

bunnypranav

New Member
May 3, 2024
Hello,
I have a very peculiar case for which I did not find any solution elsewhere. I have Proxmox VE 8.2.4 running on my homelab server. I have a 256GB SSD as my boot drive and one 620GB HDD as a data drive. The HDD is configured as a ZFS pool (ID: WdcZfs) using the entire 620GB. Image: 1724482301906.png
I have mounted the ZFS pool at /mnt/WdcZfs and added it to Proxmox under the name WdcBackups. Image:
1724482479536.png
But, as you might have already spotted, the mounted directory only shows a size of 410GB, compared to the 620GB of my ZFS pool. I have tried unmounting and remounting it, but with no luck. Until about a couple of weeks ago it was working fine with the full 620GB, but recently it suddenly shrank to 410GB. Some command outputs are given below for your reference.

# mount
Bash:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16346088k,nr_inodes=4086522,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3275880k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=6390)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
/dev/sdb2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
/dev/sdc1 on /mnt/pve/wdtb type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
overlay on /var/lib/docker/overlay2/55b6361f85416a93cd686ddedd9eb9e74da30fe2d0aa8aff72832feef36fa86d/merged type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/LKTT3D73OTH2OKLJJFCENKGRW5:/var/lib/docker/overlay2/l/AQ326LCWGYDID2S6MWNW6UOWXJ:/var/lib/docker/overlay2/l/IJBYZ3WNHMGDUU4WSG2YGUQUJN:/var/lib/docker/overlay2/l/4B2WXY7UL3EAPVKMGSB3XEVANK:/var/lib/docker/overlay2/l/RR5A3KKFN4QLTGBXZBHRW6OI7Q:/var/lib/docker/overlay2/l/4NRNMFJCEAZRU6PQGGR5XQVXWR:/var/lib/docker/overlay2/l/NDPVSQRXRGOC6AH54UENYGTPB7:/var/lib/docker/overlay2/l/Z5H56NBJJ3LTNJGWSDHMQHMWP4:/var/lib/docker/overlay2/l/ZN65KGQNUTJ4YVGWBK324V6KPB:/var/lib/docker/overlay2/l/Y4UGLCSEKLZ2YBQJWLWFMWHX7A,upperdir=/var/lib/docker/overlay2/55b6361f85416a93cd686ddedd9eb9e74da30fe2d0aa8aff72832feef36fa86d/diff,workdir=/var/lib/docker/overlay2/55b6361f85416a93cd686ddedd9eb9e74da30fe2d0aa8aff72832feef36fa86d/work,nouserxattr)
nsfs on /run/docker/netns/f7fb68638292 type nsfs (rw)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=3275876k,nr_inodes=818969,mode=700,inode64)
WdcZfs on /mnt/WdcZfs type zfs (rw,relatime,xattr,noacl,casesensitive)

# df -h
Bash:
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G  1.4M  3.2G   1% /run
/dev/mapper/pve-root   44G   31G   12G  74% /
tmpfs                  16G   34M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
efivarfs              128K   64K   60K  52% /sys/firmware/efi/efivars
/dev/sdb2            1022M   12M 1011M   2% /boot/efi
/dev/sdc1             932G  6.6G  925G   1% /mnt/pve/wdtb
/dev/fuse             128M   24K  128M   1% /etc/pve
overlay                44G   31G   12G  74% /var/lib/docker/overlay2/55b6361f85416a93cd686ddedd9eb9e74da30fe2d0aa8aff72832feef36fa86d/merged
tmpfs                 3.2G     0  3.2G   0% /run/user/0
WdcZfs                383G  300G   83G  79% /mnt/WdcZfs

# zfs list
Bash:
NAME                   USED  AVAIL  REFER  MOUNTPOINT
WdcZfs                 495G  83.0G   299G  /mnt/WdcZfs
WdcZfs/vm-101-disk-0   130G   167G  45.6G  -
WdcZfs/vm-104-disk-0     3M  83.0G    92K  -
WdcZfs/vm-104-disk-1  65.0G   139G  9.13G  -
WdcZfs/vm-104-disk-2     6M  83.0G    64K  -

# zpool list
Bash:
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
WdcZfs   596G   354G   242G        -         -     0%    59%  1.00x    ONLINE  -

# zpool status
Bash:
  pool: WdcZfs
 state: ONLINE
  scan: scrub repaired 0B in 01:51:35 with 0 errors on Sun Aug 11 02:15:36 2024
config:

        NAME                      STATE     READ WRITE CKSUM
        WdcZfs                    ONLINE       0     0     0
          wwn-0x50014ee657db94a2  ONLINE       0     0     0

errors: No known data errors

# zfs get all WdcZfs
Bash:
NAME    PROPERTY              VALUE                  SOURCE
WdcZfs  type                  filesystem             -
WdcZfs  creation              Fri Feb 16 19:56 2024  -
WdcZfs  used                  495G                   -
WdcZfs  available             83.0G                  -
WdcZfs  referenced            299G                   -
WdcZfs  compressratio         1.02x                  -
WdcZfs  mounted               yes                    -
WdcZfs  quota                 none                   default
WdcZfs  reservation           none                   default
WdcZfs  recordsize            128K                   default
WdcZfs  mountpoint            /mnt/WdcZfs            local
WdcZfs  sharenfs              off                    default
WdcZfs  checksum              on                     default
WdcZfs  compression           on                     local
WdcZfs  atime                 on                     default
WdcZfs  devices               on                     default
WdcZfs  exec                  on                     default
WdcZfs  setuid                on                     default
WdcZfs  readonly              off                    default
WdcZfs  zoned                 off                    default
WdcZfs  snapdir               hidden                 default
WdcZfs  aclmode               discard                default
WdcZfs  aclinherit            restricted             default
WdcZfs  createtxg             1                      -
WdcZfs  canmount              on                     default
WdcZfs  xattr                 on                     default
WdcZfs  copies                1                      default
WdcZfs  version               5                      -
WdcZfs  utf8only              off                    -
WdcZfs  normalization         none                   -
WdcZfs  casesensitivity       sensitive              -
WdcZfs  vscan                 off                    default
WdcZfs  nbmand                off                    default
WdcZfs  sharesmb              off                    default
WdcZfs  refquota              none                   default
WdcZfs  refreservation        none                   default
WdcZfs  guid                  15889226908735833203   -
WdcZfs  primarycache          all                    default
WdcZfs  secondarycache        all                    default
WdcZfs  usedbysnapshots       0B                     -
WdcZfs  usedbydataset         299G                   -
WdcZfs  usedbychildren        195G                   -
WdcZfs  usedbyrefreservation  0B                     -
WdcZfs  logbias               latency                default
WdcZfs  objsetid              54                     -
WdcZfs  dedup                 off                    default
WdcZfs  mlslabel              none                   default
WdcZfs  sync                  standard               default
WdcZfs  dnodesize             legacy                 default
WdcZfs  refcompressratio      1.00x                  -
WdcZfs  written               299G                   -
WdcZfs  logicalused           362G                   -
WdcZfs  logicalreferenced     299G                   -
WdcZfs  volmode               default                default
WdcZfs  filesystem_limit      none                   default
WdcZfs  snapshot_limit        none                   default
WdcZfs  filesystem_count      none                   default
WdcZfs  snapshot_count        none                   default
WdcZfs  snapdev               hidden                 default
WdcZfs  acltype               off                    default
WdcZfs  context               none                   default
WdcZfs  fscontext             none                   default
WdcZfs  defcontext            none                   default
WdcZfs  rootcontext           none                   default
WdcZfs  relatime              on                     default
WdcZfs  redundant_metadata    all                    default
WdcZfs  overlay               on                     default
WdcZfs  encryption            off                    default
WdcZfs  keylocation           none                   default
WdcZfs  keyformat             none                   default
WdcZfs  pbkdf2iters           0                      default
WdcZfs  special_small_blocks  0                      default
WdcZfs  prefetch              all                    default


Thanks in advance for all the help!
 
That's the fun of working with block and file storage, plus the different "GB" vs "GiB" units ...
As zpool list shows, the pool is 596G in size and you have already used about 200G for vm-101-disk-0 and vm-104-disk-1 (zvol block storage), as shown in zfs list.
df -h shows only the remaining pool space as the dataset size, 383G in 1024-based units, while the PVE UI shows it as 410G in 1000-based units (like df -H).
So everything is green, or put on Barbie glasses and see it in pink :cool:
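The 383G and 410G are the same amount of space in different units; a rough sketch of the 1024-vs-1000 conversion (plain awk, purely for illustration):
Bash:
# 383 GiB (what df -h reports, 1024-based) expressed in 1000-based GB
awk 'BEGIN { printf "%.0f GB\n", 383 * 1024^3 / 1000^3 }'   # -> 411 GB, roughly the 410G shown in the PVE UI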
 
GB and GiB is understood, but shouldn't the directory still show the full 620GB as its total size? Even though I have allotted 200G for the VM disks, that space is still part of the same pool. If I mount the ZFS pool to a directory, I should be able to use the full 620GB, right?
 
No. Your mounted dataset can only be as large as the space still available in the pool, and that is reduced by the VM disks (zvols) carved out of the same pool.
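A minimal sketch of how df arrives at its "Size" for the dataset, using the zfs list figures above (standard property names):
Bash:
# df's Size for a ZFS dataset is roughly the data the dataset itself references
# plus the pool space still available, not the pool size:
#   ~300G (referenced) + ~83G (available)  ->  the ~383G reported by df -h
zfs list -o name,used,available,referenced WdcZfs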
 
But for about 6-7 months I got the full 620GB of storage for the directory; only in the past week has it started showing 420GB.
 
You can migrate your VMs from zvols to your ZFS dataset and boom ... you have a 600GB filesystem mount again. But since the used amount and the pool size stay the same, nothing changes in reality; you just move the line between block and file storage inside your pool.
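To see which VM disks are currently zvols (block storage) inside the pool, and therefore candidates for such a move, a quick listing:
Bash:
# show only the zvols in the pool; these are the VM disks stored as block devices
zfs list -t volume -r WdcZfs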
 
Thanks for the suggestion. I would be very grateful if you could provide some basic instructions on how to do that.
 
Inside PVE, select your VM on the left side, then select Hardware on the right, select your hard disk, click Disk Action at the top, choose Move Storage from the pull-down menu, and select the new destination storage for the VM disk.
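If you prefer the shell, the same move can be done with qm; a sketch only, where the VM ID, disk name and target storage are placeholders to adapt to your setup:
Bash:
# move disk scsi0 of VM 101 to the storage named WdcBackups;
# without --delete the old copy stays attached as an "unused" disk
qm disk move 101 scsi0 WdcBackups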
 
thanks a lot!
 
Hello again. In the Move Storage dialog for my VM's disk I can only see these three items: 1725198021910.png
BTW, the wdtb is another 1TB drive I have attached, not related. You said I can move my storage from zvol to ZFS, but the disks are already on my ZFS pool (I believe), and I cannot see any other form of storage. What confuses me the most is that I have mounted a 620GB ZFS pool to a directory, /mnt/WdcZfs, and the directory has only 420GB. Even weirder, I had the full 620GB in the directory until very recently; I only noticed the reduction when my backups started failing due to lack of space. Any suggestions on this issue? Please ask me to send any command outputs if they will help.

Thanks
 
You mounted your zpool WdcZfs under /mnt/WdcZfs, but you haven't created a "conventional" ZFS dataset inside it yet.
So in the shell run "zfs create WdcZfs/<my_wish_name>" (look into the ZFS docs for options, but none are needed at first) to create a dataset,
and after that add that dataset in Proxmox as additional storage of type "Directory". Then move your VM disks, as in your previous post, to the new fourth option.
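A minimal sketch of those two steps on the command line; the dataset name "backups" and the storage ID "WdcDir" are just examples, pick your own:
Bash:
# create a regular filesystem dataset inside the pool; it inherits the pool's
# mountpoint, so it will appear under /mnt/WdcZfs/backups
zfs create WdcZfs/backups

# register it in Proxmox as a directory storage usable for VM disks and backups
pvesm add dir WdcDir --path /mnt/WdcZfs/backups --content images,backup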
 
Got it.
Now I have a similar issue and would be grateful for a suggestion. Due to other reasons, I did a complete clean install of Proxmox. I currently have the WdcZfs drive removed. I only have the one 256GB boot drive, this time configured as ZFS (RAID 0).

Code:
root@fabserver:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             1.42G   227G   104K  /rpool
rpool/ROOT        1.42G   227G    96K  /rpool/ROOT
rpool/ROOT/pve-1  1.42G   227G  1.42G  /
rpool/data          96K   227G    96K  /rpool/data
rpool/var-lib-vz    96K   227G    96K  /var/lib/vz
root@fabserver:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   236G  1.42G   235G        -         -     0%     0%  1.00x    ONLINE  -

1727112985743.png
1727113044171.png

For some reason, the 256GB hard drive shows as 253GB in the ZFS disk section (pretty accurate), but just 244GB in the storage section of the web dashboard. Is the reason similar, and is there a way to regain these 10 or so GB?

Thanks in advance for the help.
 
See lsblk: is this drive a boot medium?
Yes, this drive is the boot medium.

Code:
root@fabserver:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:16   0 238.5G  0 disk
├─sda1   8:17   0  1007K  0 part
├─sda2   8:18   0     1G  0 part
└─sda3   8:19   0 237.5G  0 part

Also, I just realised, this is caused by GB vs GiB, right?
But then why is the ZFS pool about 10G smaller than the drive capacity?
 
That's the "local-zfs" storage shown as 244G in the GUI, 10G less, as part of the rpool's 253G total size; the other differences are just the "GB" vs "GiB" depictions.
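For the unit part, the same 1024-vs-1000 arithmetic as before (plain awk, for illustration only):
Bash:
# pool size from zpool list: 236 GiB -> the ~253 GB shown under Disks > ZFS
awk 'BEGIN { printf "%.0f GB\n", 236 * 1024^3 / 1000^3 }'   # -> 253 GB
# space available to local-zfs (rpool/data AVAIL in zfs list): 227 GiB -> the ~244 GB storage figure
awk 'BEGIN { printf "%.0f GB\n", 227 * 1024^3 / 1000^3 }'   # -> 244 GB
# note the remaining gap already exists in GiB terms (236 vs 227), so it is not a unit-conversion artifact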
Understood, but my question is why it is 10G less. Through other research I found it might be due to a ZFS internal reserve, caching, or some other internal usage; is that right?
 
And will "df -h" make it clearer?
Code:
root@fabserver:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               16G     0   16G   0% /dev
tmpfs             3.2G 1000K  3.2G   1% /run
rpool/ROOT/pve-1  229G  1.5G  228G   1% /
tmpfs              16G   46M   16G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
efivarfs          128K  119K  4.6K  97% /sys/firmware/efi/efivars
rpool/var-lib-vz  228G  128K  228G   1% /var/lib/vz
rpool             228G  128K  228G   1% /rpool
rpool/data        228G  128K  228G   1% /rpool/data
rpool/ROOT        228G  128K  228G   1% /rpool/ROOT
/dev/fuse         128M   16K  128M   1% /etc/pve
tmpfs             3.2G     0  3.2G   0% /run/user/0
 
Mmh, 1.5GB used in "/", but nevertheless all is fine, and I wouldn't worry about a few GB that are simply missing an exact explanation.
 
