Baffled by space usage / reporting

colinstu

New Member
Jul 29, 2023
Nothing seems to add up anywhere. I know ZFS space reporting is its own can of worms, but please let me know whether I'm actually in danger, and how I can free up some space if it is a problem.

ZFS under Disks seems to be happy (I have 2x 2TB NVMe drives in a ZFS mirror). Only a little over 50% used.
Screenshot 2025-04-12 at 23.08.34.png

Under Disks I see everything, but looking at my boot drive specifically (the one that holds the Proxmox install; remember this 256GB value):
Screenshot 2025-04-12 at 23.04.16.png

What does it mean by "Number of LVs", and why is the usage that high? The only thing on my boot drive should be the Proxmox host itself.
Screenshot 2025-04-12 at 23.04.57.png

I don't use LVM-thin (AFAIK), yet I have one here with a size of 151GB. Can I tell if anything is actually using it? Can I delete it?
Screenshot 2025-04-12 at 23.07.10.png

As for the Summary screen for the Proxmox host itself, it thinks it's only using 7GB of 70GB? 10%? Huh?
Screenshot 2025-04-12 at 23.10.33.png

df is reporting ... whatever is going on here. I'm pretty sure I can ignore all the ZFS output, correct? And pve-root lines up with the findings from the Summary screen.
Ignore sdb1 and the network share; those are used for backups.
Screenshot 2025-04-12 at 23.12.42.png

zpool list and zfs list output:
zpool seems to match the ZFS UI findings above (I assume some of the difference comes from TiB/GiB vs TB/GB reporting?).
Why does zfs list show an even smaller amount of available free space?
Screenshot 2025-04-12 at 23.29.54.png
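On the TB-vs-TiB point, a quick sanity check (assuming awk is available; the numbers are illustrative, drives are sold in decimal TB while zpool/zfs and the PVE UI report binary TiB/GiB):

```shell
# A "2 TB" drive (2 * 10^12 bytes) expressed in binary TiB,
# the unit the ZFS tools and the PVE UI report in:
awk 'BEGIN { printf "%.2f TiB\n", 2e12 / (1024 ^ 4) }'
# prints: 1.82 TiB
```

So roughly 180 GB of the apparent discrepancy is purely a unit difference.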

I did discover that under Datacenter > Storage > local-zfs I did not have "Thin provision" checked. I have since enabled it.
Is there a way to convert (or restore) previously thick-provisioned disks to thin ones, to free up more space?

Thank you for any help.
 

Attachments

  • Screenshot 2025-04-12 at 23.00.39.png
  • Screenshot 2025-04-12 at 23.19.57.png
  • Screenshot 2025-04-12 at 23.29.54.png
Hi,

What does it mean by "Number of LVs", and why is the usage that high? The only thing on my boot drive should be the Proxmox host itself.
This is the overview of LVM volumes. By default, when you choose ext4 or xfs as the filesystem, the installer creates an LVM partition (as you saw); on top of that it then allocates a root volume and data pools, one of them thin-provisioned for the actual disk images. If you do not explicitly set any advanced size parameters during installation, it will use up the entire LVM partition.
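You can inspect that layout yourself with the standard LVM tools on the host (run as root; the names `pve`, `root`, `swap` and the thin pool `data` are the installer defaults and may differ on your system):

```shell
pvs   # physical volumes: the LVM partition on the boot disk
vgs   # volume groups: total size vs. remaining free space
# logical volumes, including how full the thin pool actually is:
lvs -a -o lv_name,vg_name,lv_size,data_percent
```

The `data_percent` column is the number that matters for a thin pool: it shows real usage, not the provisioned size.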

As for the Summary screen for the Proxmox host itself, it thinks it's only using 7GB of 70GB? 10%? Huh?
This means the root LVM volume is 70 GiB in size. The installer has defaults for the volume sizes; see also the Advanced LVM Configuration Options section in our admin guide. For a normal installation with all data on other volumes/disks, this is a fairly typical value.

Why does zfs list an even smaller available amount of free space?
Free space calculation for CoW filesystems is not trivial. I'd suggest also looking at REFER, e.g. with zfs list -o space,refer.
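One concrete contributor, as a sketch assuming current OpenZFS defaults: ZFS holds back "slop space" (roughly 1/32 of the pool, within lower and upper bounds) that `zpool list` still counts in FREE but `zfs list` excludes from AVAIL:

```shell
# Back-of-the-envelope: on a ~2 TB pool, the slop reservation alone is
# around 1/32 of the pool (illustrative numbers, not from this thread):
awk 'BEGIN { size = 2e12; slop = size / 32; printf "slop ~ %.1f GB\n", slop / 1e9 }'
# prints: slop ~ 62.5 GB
```

Snapshots and refreservations on thick-provisioned zvols can shrink AVAIL further, which is why the REFER columns are worth checking.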

Overall, everything seems fine and as it should be. There are lots of threads in the forum about both of these topics, some with very detailed answers, so you should be able to find even more information that way :)