LVM 100% usage

maaksel

New Member
Mar 31, 2019
Updated to 5.3, and noticed that my disk usage is at 100% for the LVM. Any idea why or how I can correct this? I only have 2 VMs up, 160GB and a 16GB, neither of which are over ~70% full.

[Attachment: upload_2019-3-30_16-5-43.png]

maaksel

New Member
Mar 31, 2019
So, my friend kind of explained it to me, and showed me the math using the lvs command.

Here is mine: [Attachment: upload_2019-3-31_8-24-59.png]

So, I do have five volumes, even though I thought I only had two. He said he doesn't know exactly how the math works, but that data is basically a 'close sum' of the VM disks I have.

I made another server, 101, with 20GB, and my volume count moved up to six, but the % stayed the same. I made another, 102, with 20GB, and as expected it moved up to seven, and the % still stayed the same.

He said it's not really a big deal until my data % starts getting used up. I am not a strong Linux guy, but he reminded me that there is PLENTY of stuff we can clean up/prune/trim from the system, since I haven't done anything to it other than upgrades and such. Hope this at least makes a little more sense and puts us down the path of learning. It did for me, but I plan on doing a bit more research into how the math works.
 

JBB

Member
Interesting, but I'm not sure I completely understand. Are you saying that it's not a problem for the LVM data percentage to be at or near 100%?

BTW, I see that the output from your lvs command is rather different from mine. You seem to have lvm-thin storage, though, while I have plain LVM. But again, I don't know what the implications of this are.

Code:
root@host:~# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve -wi-ao---- 321.75g                                                   
  root pve -wi-ao----  96.00g                                                   
  swap pve -wi-ao----  31.00g
 

BobhWasatch

Member
Mar 16, 2019
maaksel said:
So, my friend kind of explained it to me, and showed me the math using the lvs command.

Here is mine: View attachment 9902

So, I do have five volumes, even though I thought I only had two. He said he doesn't know exactly how the math works, but that data is basically a 'close sum' of the VM disks I have.
The output of the lvs command shows that you have five logical volumes allocated from the volume group "pve". The root and swap volumes are regular volumes used for your root filesystem and swap.

Data is an lvm-thin pool (the left-hand 't' in the Attr column means it is a thin pool rather than a regular volume). Your VM disks are allocated from that. So the whole physical volume group is allocated, but a big chunk of it is allocated to the lvm-thin pool from which thin volumes can be sub-allocated.
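As an aside, here's a rough sketch of decoding that Attr column. The attribute letters come from the lvs man page; the sample lines are JBB's plain-LVM output earlier in the thread:

```python
# Classify logical volumes by the first character of the lvs Attr column.
# Per lvs(8): 't' = thin pool, 'V' = thin (virtual) volume, '-' = plain volume.
def lv_kind(attr):
    return {"t": "thin pool", "V": "thin volume", "-": "plain volume"}.get(attr[0], "other")

# Sample lines copied from the plain-LVM `lvs` output earlier in the thread.
sample = """\
data pve -wi-ao---- 321.75g
root pve -wi-ao----  96.00g
swap pve -wi-ao----  31.00g"""

for line in sample.splitlines():
    name, vg, attr = line.split()[:3]
    print(name, "->", lv_kind(attr))   # all three are plain volumes here
```

A thin pool would show something like `twi-aotz--` and its thin volumes `Vwi-a-tz--`.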

The thing with lvm-thin is that you can allocate VM disks with some declared maximum size, but they will only take blocks from the pool as they are used. So, for example, your vm-100-disk-1 has a maximum size of 160G but is only using 67% of that right now. The 16G disk is only using 8% of its allocation, and overall the data pool is about 65% used. Note that this means you can over-commit storage. You have 160+16=176G allocated out of 165 available.
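Spelling out that arithmetic (the 160G/16G disk sizes and usage percentages are from the thread; the 165G pool size is the figure quoted above):

```python
# Declared (virtual) sizes of the two VM disks, in GiB.
disks = {"vm-100-disk-1": 160, "vm-100-disk-2": 16}
pool_size = 165  # thin pool size quoted in the post

declared = sum(disks.values())        # total declared: 176 GiB
print(declared, ">", pool_size)       # 176 > 165: over-committed

# Actual usage: each thin disk only consumes the blocks it has written.
used = 160 * 0.67 + 16 * 0.08         # ~108 GiB actually in use
print(round(100 * used / pool_size))  # ~66%, close to the ~65% Data% above
```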

The difference with @JBB is his data volume is not thin. It is probably mounted as a filesystem under /var/lib/vz and the VM disk images are files (either raw or qcow2) rather than lvm block devices. The whole size of the volume is considered allocated as far as the volume manager is concerned and you can't over-commit.
 

JBB

Member
BobhWasatch said:
The difference with @JBB is his data volume is not thin. It is probably mounted as a filesystem under /var/lib/vz and the VM disk images are files (either raw or qcow2) rather than lvm block devices.
That's indeed true in my case.

BobhWasatch said:
The whole size of the volume is considered allocated as far as the volume manager is concerned and you can't over-commit.
OK - so I don't need to be worried about running out?
 

BobhWasatch

Member
Mar 16, 2019
JBB said:
That's indeed true in my case.

OK - so I don't need to be worried about running out?
Um, maybe. I misspoke a bit. If you are using qcow2-format VM disks, they behave similarly to lvm-thin in that space isn't acquired until it is used. In that case you can declare VM disks that total a size bigger than the actual storage.
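You can see the "space isn't acquired until it is used" behaviour with an ordinary sparse file, which is a stand-in here for what a qcow2 image does (just an illustration; qemu-img is more elaborate about it):

```python
import os
import tempfile

# Create a file with a declared size of 1 GiB without writing any data.
fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, 1024**3)      # virtual size: 1 GiB

st = os.stat(path)
virtual = st.st_size            # what `ls -l` reports: 1 GiB
actual = st.st_blocks * 512     # blocks really allocated: essentially zero
print(virtual, actual)

os.remove(path)
```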
 

JBB

Member
OK, so what I'm trying to understand is why Proxmox tells me I have about 50% free on /var/lib/vz (where the filesystem on my LVM data volume is mounted):

Code:
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G  306M  2.9G  10% /run
/dev/mapper/pve-root   95G  6.3G   84G   7% /
tmpfs                  16G   37M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  16G     0   16G   0% /sys/fs/cgroup
/dev/sda2             486M  191M  271M  42% /boot
/dev/mapper/pve-data  317G  160G  157G  51% /var/lib/vz
/dev/fuse              30M   36K   30M   1% /etc/pve
tmpfs                 3.2G     0  3.2G   0% /run/user/1000
... but then says I've used about 97% of /dev/sda3, which is the physical volume backing that volume group (as far as I know):

Code:
root@host:~# pvs
  PV         VG  Fmt  Attr PSize   PFree 
  /dev/sda3  pve lvm2 a--  464.75g 16.00g
BTW my disk files (qcow2) all have discard turned on and fstrim running weekly.
 

BobhWasatch

Member
Mar 16, 2019
You guys have two different setups. It is likely that you installed Proxmox a while ago and upgraded, while his install is newer. Thin provisioning is a relatively new feature.

Since you aren't using thin provisioning, the space for logical volume "data" is pre-allocated. Even if the filesystem on /dev/mapper/pve-data is empty the space is committed at the volume manager layer. That doesn't mean you're out of space for files, it just means that you can't allocate any more block devices or grow an existing one. In your case the relevant free space is the filesystem free space.
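That ~97% is just the volume-manager allocation, which you can reproduce from the numbers in your pvs output:

```python
# Numbers from the pvs output above: PSize 464.75g, PFree 16.00g.
psize, pfree = 464.75, 16.00

allocated = psize - pfree    # 448.75 GiB handed out to logical volumes
pct = 100 * allocated / psize
print(round(pct, 1))         # ~96.6%: the "97% used" in question
```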

With thin provisioning, VM disks are block devices allocated from a thin pool. They are not stored as files in /var/lib/vz; rather, the VMs work directly with the logical block device. So in that case the percentage free of the thin pool is what's relevant to how much space you have for VMs.

Two different ways of handling space allocation. Using fstrim is a good idea in either case.
 
