[SOLVED] Node HD Space (Summary Tab) and Node Device Total Storage (Disks Tab) Do Not Match

edd189
New Member · May 23, 2024
Why don't these add up? This is a single node with a 250GB NVMe and a 4TB SATA SSD, yet the Summary tab reports only 66.35 GiB total HD space, 97% full. I am getting some odd errors (occasional VM lockups) and am trying to eliminate this as a cause.

[Attachments: Summary tab and Disks tab screenshots of node pve4]
 
The '/ HD space' is the root filesystem, which is only a (small) part of the LVM on your NVMe.
You are probably putting the VM virtual disks as files on the root filesystem instead of on the LVM(-Thin) or ZFS storage, which is a common mistake for people coming from other hypervisors. There are lots of threads on this forum about the root filesystem filling up because backups or VM disk files ended up there by accident.
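A quick way to check from the shell whether that is what happened is to look at how full the root filesystem is and whether any disk-image or backup files live on it. A rough sketch, assuming the default 'local' directory storage path of /var/lib/vz (verify against your own /etc/pve/storage.cfg):

```shell
# Two quick checks for the "VM disks as files on root" mistake.
# /var/lib/vz is the default path of the 'local' directory storage on Proxmox VE;
# adjust the path if your storage.cfg says otherwise.
df -h /                                    # how full is the root filesystem?
find /var/lib/vz -maxdepth 4 \
    \( -name '*.qcow2' -o -name '*.raw' -o -name '*.vma*' \) \
    -exec ls -lh {} + 2>/dev/null || true  # image/backup files living on root
```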
 
Just to add to leesteken's answer: if you want a general overview of which storages are available, along with their usage and capacity, expand the node in the left-hand column of the GUI and they will all be listed there, together with the pertinent info.
 
Just one thing to add: that consumer-grade CT4000MX500SSD1 seems to be rated at ~0.14 DWPD (over 5 years), which is pretty low for server storage (especially under ZFS, though it appears you are not running RAID). I notice Wearout shows n/a; I'm not sure whether that is concerning, but you should check the actual SMART values.

Your 250GB 960 EVO fares a little better at ~0.22 DWPD (over 5 years), but even that is not very good.
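The DWPD figures above follow directly from the vendors' rated endurance. As a sketch of the arithmetic (assuming the commonly published ratings of 1000 TBW for the 4TB MX500 and 100 TBW for the 250GB 960 EVO; check your own datasheets):

```shell
# DWPD = rated TBW / (capacity in TB * 365 days * 5 years)
# The TBW figures below are assumptions from public datasheets, not from this thread.
dwpd() { awk -v tbw="$1" -v cap="$2" 'BEGIN { printf "%.2f\n", tbw / (cap * 365 * 5) }'; }
dwpd 1000 4     # CT4000MX500SSD1: 1000 TBW over 4 TB   -> 0.14
dwpd 100 0.25   # 960 EVO 250GB:   100 TBW over 0.25 TB -> 0.22
```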

Even in a home server, you had better make sure you have full (external) backups of all your data if you intend to keep it!
 
Looks like it may have been a setup mistake on my part. I used only a portion of my 250GB NVMe for the system/root. The rest I had at one point provisioned as an LVM-thin volume, but later deleted it because it was unused.

I could not find anything else using the rest of the disk (VMs are on the ZFS disk and backups go to an SMB share), so I ran the commands below and can now use the whole disk.

Code:
lvextend -l +100%FREE /dev/pve/root   # grow the root LV into all remaining free space in the VG
resize2fs /dev/pve/root               # grow the ext4 filesystem to fill the enlarged LV
 
I could not find anything else using the rest of the disk
This is surprising, as 64.36 GiB for just the Proxmox OS/root is a colossal amount. (Mine, for example, is 14G, and I believe that is on the "big" side.)

I suspect you missed something - maybe a backup was done while that SMB share was unavailable? Some ISOs, templates, etc.

Maybe try running:
Code:
du -h -x -d1 /   # -h human-readable, -x stay on this filesystem, -d1 one directory level deep
 
This is what is returned after running the command above. These numbers do not add up to 64.36 GiB.

16K /lost+found
412M /boot
172K /tmp
5.0M /etc
4.0K /opt
1.7G /var
4.0K /srv
8.0K /mnt
64K /root
4.7G /usr
4.0K /home
4.0K /media
6.8G /
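One way to pin down where the missing space went is to compare what the filesystem reports as used with what du can actually see; a large gap usually points at files hidden underneath a mount point, or at deleted files still held open by a process. A minimal sketch:

```shell
# Compare space the filesystem reports as used (df) with space du can see.
# A big difference hints at data hidden under a mount point, or at
# deleted-but-still-open files.
used_kb=$(df --output=used -k / | tail -n 1 | tr -d ' ')
seen_kb=$(du -s -x -k / 2>/dev/null | cut -f1)
echo "df reports ${used_kb} KiB used, du sees ${seen_kb} KiB"
```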
 
I suspect you missed something - maybe a backup was done when that SMB was unavailable? Some ISOs Templates etc.
Exactly! I unmounted the SMB share, rebooted, and found several backup files in a local directory with the same name as the share's mount point; they were stored locally, not on the share. Good call. Cleaned those up and now I have:

HD space 3.06% (6.73 GiB of 219.81 GiB)
 
I ran the below code
lvextend -l +100%FREE /dev/pve/root
resize2fs /dev/pve/root
Don't do this. There's a reason some space is left unallocated: you will have a much harder time fixing your system if you ever make an over-allocation mistake and your thin pool reaches 100%.
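If you do keep unallocated space in the VG, LVM can also auto-grow a thin pool before it fills completely. A sketch of the relevant /etc/lvm/lvm.conf settings (these are real option names; the values shown are illustrative, and the threshold defaults to 100, i.e. auto-extension disabled):

```
# activation section of /etc/lvm/lvm.conf
# auto-extend a thin pool by 20% once it crosses 80% full,
# provided free extents remain in the volume group
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 20
```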
 