Discrepancy between assigned disk space and "df -h"

Andy97

Hi,
I hope someone can enlighten me about my scenario:
On Proxmox 7.0-14 I installed Ubuntu Server 20.04.3 with the disk size set to 50 GB. I initially thought that would be more than enough space. The server ran very well for a month or so, but today I found that the disk was almost full!
(I found a few deployments (Docker) that behave badly, so I could adjust those.)
The point, however, is that I do not understand how much space I actually have.
When I run "df -h" I get:

Filesystem                         Size  Used Avail Use% Mounted on
udev                               950M     0  950M   0% /dev
tmpfs                              199M  3.3M  196M   2% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   24G   16G  6.9G  70% /
tmpfs                              994M     0  994M   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              994M     0  994M   0% /sys/fs/cgroup
/dev/loop1                          56M   56M     0 100% /snap/core18/2246
/dev/loop2                          62M   62M     0 100% /snap/core20/1169
/dev/loop0                          56M   56M     0 100% /snap/core18/2128
/dev/loop3                          68M   68M     0 100% /snap/lxd/21803
/dev/loop4                          68M   68M     0 100% /snap/lxd/21835
/dev/loop6                          33M   33M     0 100% /snap/snapd/13640
/dev/loop5                          43M   43M     0 100% /snap/snapd/13831
/dev/sda2                          976M  203M  707M  23% /boot
tmpfs                              199M     0  199M   0% /run/user/1000

With lsblk I get:
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 55.4M  1 loop /snap/core18/2128
loop1                       7:1    0 55.5M  1 loop /snap/core18/2246
loop2                       7:2    0 61.9M  1 loop /snap/core20/1169
loop3                       7:3    0 67.2M  1 loop /snap/lxd/21803
loop4                       7:4    0 67.2M  1 loop /snap/lxd/21835
loop5                       7:5    0 42.2M  1 loop /snap/snapd/13831
loop6                       7:6    0 32.5M  1 loop /snap/snapd/13640
sda                         8:0    0   50G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   49G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0 24.5G  0 lvm  /
sr0                        11:0    1 1024M  0 rom

During installation I used all the defaults, but here the usable size of the main disk appears to be only 24 GB, just about half of my assigned 50 GB.
Can anyone explain to me why that is? What am I missing here?
And more importantly: can anyone advise on a strategy for setting up storage? Say I want to create another Ubuntu Server with 100 GB of storage: should I then assign 200 GB in order to actually have 100 GB when inspecting with "df -h"?
Is anyone else having these questions, or am I just missing some very obvious setting?
/Andy97
 
Hi,

Did you install the Ubuntu VM with a specific storage size?

However, since it is growing and nearly full, you can resize the Ubuntu VM's disk.
Another solution that might be helpful (if you installed the VM on thin-provisioned storage): you can set the Discard option on the VM disk. See the section [Trim/Discard] [0] in our Proxmox VE guide for more information.

[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_virtual_machines_settings
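
For reference, a resize from the Proxmox side typically looks something like this. A minimal sketch; the VM ID 100 and disk scsi0 below are assumptions, so adjust them to your setup:

# On the Proxmox host: grow the virtual disk of VM 100 by 50 GB
qm resize 100 scsi0 +50G

# Inside the guest, propagate the new space up the stack, e.g.:
sudo growpart /dev/sda 3    # from cloud-guest-utils; grows partition 3
sudo pvresize /dev/sda3     # let LVM see the bigger partition

# With Discard set on the VM disk (and thin-provisioned storage),
# freed blocks can be handed back to the storage from inside the guest:
sudo fstrim -av

After pvresize, the root logical volume and its filesystem can be grown with lvextend as usual.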
 
Hi,

Did you install the Ubuntu VM with a specific storage size?
Yes, I set it to 50 GB. I also checked the Discard box; other than that I went with the defaults for the disk settings.
Then I started it up (Ubuntu 20.04.3 server) and went through the initial setup questions, accepting the defaults for the disk settings all the way through. I have now created a second VM in exactly the same way and found that one of the disk options is checked by default: "Set up this disk as an LVM group". To see what difference that makes, I created a third VM, this time with that option unchecked. Once both of these VMs were started I checked with "df -h", and the last one reported a size of 49G with 43G free, which is as expected.
So it's all down to LVM then, which I honestly don't understand (as yet). Why use LVM and get half the size I requested?
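
For anyone curious where the other half went, the LVM reporting tools will show it. A quick sketch, assuming the default volume group name ubuntu-vg that the installer creates:

sudo vgs ubuntu-vg    # VSize is the total; VFree is the space left unallocated
sudo lvs ubuntu-vg    # the logical volumes carved out of the group so far

Evidently the guided LVM layout allocates only part of the volume group to the root LV and leaves the rest free to grow later; that leftover is what vgs reports as VFree.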

However, since it is growing and nearly full, you can resize the Ubuntu VM's disk.
That sounds good, but HOW?
 
Hi all,
I just solved the issue by extending the logical volume: "sudo lvextend --resizefs -l +50%FREE /dev/mapper/ubuntu--vg-ubuntu--lv". I could perhaps have used 100%FREE, but I wanted to try a small step first. "df -h" now reports a size of 37G instead of 24G on my 50G virtual disk.
What really helped was a superbly made video by Jay LaCroix of LearnLinuxTV (Linux Logical Volume Manager (LVM) Deep Dive Tutorial). I learned a lot from that video. Thank you, Jay!
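
For anyone else who lands here, a minimal sketch of claiming all of the remaining space in one go (assuming the default names the Ubuntu installer uses; check yours with vgs and lvs first):

# Grow the root logical volume into all remaining free space in the
# volume group and resize the filesystem on it in the same step
sudo lvextend --resizefs -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

# Confirm the new size of the root filesystem
df -h /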