LVM size

lightnet-barry

I'm having issues with VMs on one of my cluster nodes and one thing I am unsure of is that the LVM containing the PVE VG is 93% full:
Code:
root@HaPVEamax4:~# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sdg3  pve lvm2 a--  <223.07g <16.00g
root@HaPVEamax4:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   3   0 wz--n- <223.07g <16.00g
root@HaPVEamax4:~# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- 140.45g             0.00   1.14
  root pve -wi-ao----  55.75g
  swap pve -wi-a-----   8.00g
Is this likely to cause any issues, and if so is there anything I can do about it?

Barry
 
Hi,

What kind of issues do you have? Do you mean the file system on pve/root is 93% full, or just that 93% of the space in the volume group is assigned to/reserved by logical volumes? The latter is not a problem. In the output above, the Data% of the thin pool pve/data is 0%, so that storage (local-lvm on a standard installation) is not used at all.
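For anyone checking this distinction on their own node, a minimal sketch (the VG name pve matches a standard installation; adjust it if yours differs):
Code:
# filesystem usage of the root LV -- this is what "pve/root is 93% full" would mean
df -h /
# how much of the volume group is still unassigned to any LV
vgs -o vg_name,vg_size,vg_free pve
# how much of the thin pool's assigned space the guests actually use
lvs -o lv_name,lv_size,data_percent,metadata_percent pve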
Hi Fabian,

I'm having multiple issues, some of which are in other posts.
I have one VM running on this particular host, which takes 3-5 seconds to complete a write operation (that said, when I migrated it to another host the write issues did not improve). I also have issues migrating VMs to this host.
The only reason I raised this is that the LVM is flagged red in the GUI due to the percentage of the VG used. You are correct that it has 0% data use.

Barry
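One rough way to put a number on write latency like that is a synchronous dd run against the storage in question; this is only a sketch, and /var/lib/vz/latency-test is just an example target path on the host.
Code:
# write 4k blocks with O_DSYNC, so every block must hit the disk before the next one starts
dd if=/dev/zero of=/var/lib/vz/latency-test bs=4k count=1000 oflag=dsync
# remove the test file afterwards
rm /var/lib/vz/latency-test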
 
What storage are you using for the VM, and what kind of hardware/guest do you have? Are any special options set on the virtual disk? Please also share the output of pveversion -v.

As for the LVM being flagged red in the GUI: it just means that the space from the volume group is assigned to the logical volumes (not that the logical volumes are actively using that much). You were by far not the only one confused by this ;) I sent a patch to hopefully make this clearer.
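Working through the numbers from the pvs and lvs output above: root (55.75g) + swap (8g) + the data thin pool (140.45g, plus its metadata) leave <16g of the <223.07g VG free, i.e. roughly (223.07 - 16.00) / 223.07 ≈ 93% of the VG assigned, which is where the red 93% comes from even though the thin pool's Data% is still 0.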
 
There are no special options set on the VM, though there may be some other issue with it. It's an upgraded RADIUS server; I've pulled the configs as developed on it and will redeploy them on a new VM.
Output of pveversion -v:
Code:
proxmox-ve: 6.4-1 (running kernel: 5.4.162-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-12
pve-kernel-helper: 6.4-12
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.4.134-1-pve: 4.4.134-112
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.2.6-1-pve: 4.2.6-36
ceph: 14.2.22-pve1
ceph-fuse: 14.2.22-pve1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1

I updated last night. I need to schedule an upgrade to v7, but I was hoping to wait for new hardware and migrate to a new cluster.

Barry
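When that v7 upgrade does get scheduled, the upgrade checklist script that ships with recent pve-manager 6.4 packages can be run ahead of time; just a sketch of the invocation:
Code:
# report potential problems before upgrading from Proxmox VE 6.x to 7.x
pve6to7 --full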
 
