Can't figure out the total disk space used

brains
New Member · Dec 16, 2023
Proxmox noob here. I'm trying to learn the software before deciding whether it's worth using to replace my bare metal server. Right now, I have Proxmox running in a VM with 48 GB of space. The issue is I can't find the total disk space used, which makes it hard to allocate disk space to VMs because I run out of room / get I/O errors. I am not using any fancy setup. I chose the ext4 filesystem during install and no other custom options.

- Under "pve" in the Server View > Disks > LVM - It shows 47.78 GB Size and 5.81 GB free.
- Under "LVM-Thin" - It shows 13.79 GB for both Size and Used.
- I have a Linux Mint VM allocated 18.68 GB of space and a FreeBSD VM allocated 3 GB of space, but the latter doesn't launch because there is not enough room in Proxmox.
- Under "local (pve)" > ISO images - I have FreeBSD 4.54 GB in Size and Linux Mint 3.03 GB in Size.

I'm trying to find an accurate breakdown of the total space used, but I am getting different numbers everywhere. Also, when I total some of the numbers up, they go over the 48 GB I allocated to the VM, which doesn't make sense to me. If I add up the sizes in `lvs`, the total is 58.59 GB?


Code:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  1.9G     0  1.9G   0% /dev
tmpfs                 391M  872K  390M   1% /run
/dev/mapper/pve-root   20G  9.7G  9.2G  52% /
tmpfs                 2.0G   46M  1.9G   3% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/fuse             128M   16K  128M   1% /etc/pve
tmpfs                 391M     0  391M   0% /run/user/0

root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   5   0 wz--n- <44.50g 5.50g
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <12.84g             100.00 2.01                           
  root          pve -wi-ao---- <20.34g                                                   
  swap          pve -wi-ao----   3.81g                                                   
  vm-100-disk-0 pve Vwi-a-tz--  18.60g data        65.61                                 
  vm-101-disk-0 pve Vwi-a-tz--   3.00g data        21.17                                 
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/vda3  pve lvm2 a--  <44.50g 5.50g
root@pve:~#
 
Please post console output between CODE tags. Otherwise we can only guess what the tables should look like...

/dev/mapper/pve-root 20G 9.7G 9.2G 52% /
This means your root filesystem (the only place you can store files/folders; this storage is called "local") is 20G, with 9.7G used.

swap pve -wi-ao---- 3.81g
There is a 3.81G swap.

data pve twi-aotz-- <12.84g 100.00 2.01
This is your thin pool (the storage called "local-lvm") where your VMs'/LXCs' virtual disks are stored. It is only 12.84G and is 100% full. So the sum of all VMs/LXCs data shouldn't exceed 12.84G.
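You can see how the pool filled up from the Data% column of `lvs`: the real space written to each thin volume is its virtual size times its Data%. A quick check with the figures hardcoded from the `lvs` output earlier in the thread:

```shell
# Real space written to each thin volume = virtual size x Data%
# (values copied from the `lvs` output in the first post)
awk 'BEGIN {
  mint    = 18.60 * 65.61 / 100   # vm-100-disk-0 (Linux Mint)
  freebsd =  3.00 * 21.17 / 100   # vm-101-disk-0 (FreeBSD)
  printf "mint=%.2fG freebsd=%.2fG total=%.2fG\n", mint, freebsd, mint + freebsd
}'
```

That prints a total of 12.84G, which is exactly the pool size, hence Data% 100.00 on the pool itself.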

pve 1 5 0 wz--n- <44.50g 5.50g
5.5G of the VG is unallocated, so you could take some snapshots of the root filesystem if you ever need to.
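About the 58.59G total from `lvs`: the two `Vwi` volumes are thin, so their sizes are virtual and live *inside* the `data` pool. Adding them on top of the pool double-counts. A sketch with the numbers from the post:

```shell
# Only root, swap and the thin pool consume real VG space; the two
# vm-*-disk-0 volumes are virtual sizes carved out of the pool.
awk 'BEGIN {
  real  = 20.34 + 3.81 + 12.84    # root + swap + thin pool
  naive = real + 18.60 + 3.00     # + thin volumes (double-counted)
  printf "real=%.2fG naive=%.2fG\n", real, naive
}'
```

The ~37G of real allocation plus the 5.5G free roughly matches the 44.50G VG; the small remaining gap is presumably thin-pool metadata and the rounded "<" values `lvs` displays.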
 
Sorry about that, just saw the mistake and corrected it. I'm used to using backticks.

When you say
storage is called "local"
Do you mean "local (pve)"? The summary under "pve" shows HD space 82.80% (16.44 GiB of 19.85 GiB), whereas the Usage summary under "local (pve)" is 82.80% (17.65 GB of 21.32 GB). One reports in GiB, the other in GB, which is throwing me off.
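Doing the conversion myself (1 GiB = 1024³ bytes, 1 GB = 10⁹ bytes, so GB = GiB × 1.073741824), the two summaries do line up:

```shell
# GiB (base-1024) to GB (base-1000): multiply by 1024^3 / 10^9
awk 'BEGIN {
  printf "16.44 GiB = %.2f GB\n", 16.44 * 1.073741824
  printf "19.85 GiB = %.2f GB\n", 19.85 * 1.073741824
}'
```

(The 21.31 vs 21.32 difference is just rounding of the displayed GiB value.)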

So the sum of all VMs/LXCs data shouldn't exceed 12.84G.

This confuses me, because my Linux Mint VM is 18.6 GB. Running `df -h /` inside that VM shows 5.4 GB of free space. If I already exceeded 12.84 GB, why is the VM running just fine?
 
This evening was the first time I've revisited Proxmox since my last reply. I think I'm putting the pieces together...

I couldn't even boot my Linux Mint VM anymore; it was throwing an I/O error because local-lvm (pve) was full. It works again now after deleting my FreeBSD VM, which freed up 3 GB.

So even though I allocated 18 GB to Linux Mint, the actual amount of space I have for all virtual machines is 13.79 GB. Mint has no way of knowing when local-lvm (pve), aka its real disk space, is full. I could have given it 10 TB, and it still would have failed to launch if local-lvm (pve) was full.

I guess my question now is: how do you handle this in a real-world scenario, not a VM inside a VM? If I install Proxmox on an SSD, is it going to use up the whole SSD (seems overkill), or only take up a reasonable amount of space, leaving room for VMs? If the latter, then I would install a few VMs, maybe only 30-40 GB in size, and pass through disks for them to use. Is that how most homelabbers are doing it?
 
I think you might still be a little confused, since you think Proxmox using the whole disk won't leave room for VMs. That is false.

A default single-disk install will use the whole disk as an LVM volume group. A small portion of that will go to the PVE root filesystem (mine is 32 GB on a 1 TB disk). Most of the rest of the space will be given to an lvm-thin storage that holds the VM disks. This is much more flexible than giving each VM a separate disk, and in general you do not want to do that.
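Part of that flexibility is overprovisioning: thin volumes can promise more virtual space than the pool actually has. Your own setup earlier in the thread illustrates it (numbers taken from your `lvs` output):

```shell
# Thin provisioning: virtual sizes may exceed the pool backing them.
# Pool = 12.84G, but Mint (18.60G) + FreeBSD (3.00G) = 21.60G promised.
awk 'BEGIN { printf "overcommit ratio = %.2fx\n", (18.60 + 3.00) / 12.84 }'
```

That is legal, but it only works as long as the guests don't actually write more than the pool holds.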

There are some types of VM that like to have an actual dedicated disk, namely certain NAS software. Even there it usually is not required unless you want to take advantage of special features. I run an NFS server in a VM that serves the home directories for other machines, both real and virtual, and I use an lvm-thin volume formatted ext4 for the data disk.

The PVE manual has a whole section on storage and how it is used. There is a link on the main page or you can use this one:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_storage
 
So even though I allocated 18 GB to Linux Mint, the actual amount of space I have for all virtual machines is 13.79 GB. Mint has no way of knowing when local-lvm (pve), aka its real disk space, is full. I could have given it 10 TB, and it still would have failed to launch if local-lvm (pve) was full.
That local-lvm is not the disk for that one VM; it is where you put all of the disks for your VMs. If you had allocated something smaller, say 6 GB, to Mint, it would not have been able to use 18 because from its view the disk would have been full. You allocated it more space than you actually had, and that is why you ran into trouble. That is legal on thin volumes and makes sense in some scenarios, but if you do it then you must monitor the space actually used.
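A minimal sketch of that monitoring (the usage value is stubbed here with the 100.00% from your `lvs` output; on a real node you would read it from `lvs --noheadings -o data_percent pve/data` instead):

```shell
#!/bin/sh
# Warn when the thin pool crosses a usage threshold.
# On a real PVE host, replace the stub with:
#   usage=$(lvs --noheadings -o data_percent pve/data | tr -d ' ')
usage="100.00"
threshold=90
if awk -v u="$usage" -v t="$threshold" 'BEGIN { exit !(u > t) }'; then
  echo "WARN: thin pool at ${usage}% (threshold ${threshold}%)"
fi
```

Run that from cron (or wire the echo into mail/notifications) and you'll hear about a filling pool before the guests start throwing I/O errors.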
 
Thanks for the reply, BobhWasatch!

Running Proxmox in a VM doesn't seem to be a realistic scenario, and allocating it only 40 GB didn't help matters.

Within the next couple of weeks, I am going to be getting some hardware to experiment on, so hopefully things will make more sense once I do a real installation with some drives.