local-lvm slowly filling up space on its own, how to find out what is doing this and stop it?

gama1 (New Member), Aug 28, 2022:
Hello,

The title says it all: my local-lvm is filling up on its own and I have no idea how or why. It brought my VMs to a standstill when it hit 100%. Luckily I had an old backup I could delete via the Proxmox GUI, so I regained some space and am back online, but I would like to look inside local-lvm and find out what is doing this and stop it before it fills up again. Perhaps log files? Please see the attached image for reference.

I am not running any backups, so I have no idea what could be causing this. I'm not a Linux guru by any means, so I'm a bit stuck. Any help would be massively appreciated.

Thank you
 

Attachments

  • localLvm.png (194 KB)
Hi,
the default local-lvm storage is thinly provisioned. This means you can provision more space to VMs than is actually available, and VMs only start using more actual space when they need it. In the output of lvs, the LSize column shows the provisioned size (how much the volume can use at most) and Data% shows how much of that size is currently in use.

If you do provision more space, you will get a warning like
Code:
WARNING: Sum of all thin volume sizes (<1.34 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (<931.01 GiB).
when you e.g. create a new disk. If you do over-provision, you need to be careful that the actual usage stays below what is available to the thin pool (the thin pool is the abstract place the thinly provisioned volumes are allocated from; for local-lvm it is called data).
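To see those columns on the host, a minimal check might look like the following (assuming the default volume group pve and thin pool data; adjust the names if your setup differs):
Code:
# show the thin pool and all thin volumes in the default volume group
# LSize = provisioned size, Data% = how much of it is actually used
lvs pve

# or limit the output to the relevant columns
lvs -o lv_name,lv_size,data_percent,pool_lv pve
The data pool row shows the overall usage of the pool; the vm-*-disk-* rows show per-disk usage.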
 
Is Discard enabled on the virtual disks, and do the operating systems inside the VMs trim deleted blocks? Otherwise even deleted data will keep taking space and the thin volumes can never shrink (see for example "Using fstrim to increase free space in a thin pool LV" in man lvmthin).
 
How can I check this?

For LXCs on lvm-thin you can manually trigger a trim:
Code:
pct list | awk '/^[0-9]/ {print $1}' | while read ct; do pct fstrim ${ct}; done
Do I run this on the individual VMs, or on the PVE host?

Thank you !


EDIT: This is my very simple setup (see attached image). If someone could tell me what to execute where (on the PVE host or on the individual VMs?), I would very much appreciate it. The material I can find online seems to be aimed at people far more experienced than I am :(


EDIT 2: I checked
Code:
systemctl status fstrim.timer
systemctl list-timers fstrim.timer
on my two VMs and also on the PVE host, and it looks like trim is enabled. I still have no idea how to check discard. In any case, seeing that trim appears to be enabled, could this mean that my problem isn't a trim/discard issue?
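(For anyone with the same question, one way to check this, with VM ID 100 as a placeholder, is to look at the disk options on the host and at discard support inside the guest:)
Code:
# on the PVE host: the disk line should contain discard=on
qm config 100 | grep -i discard

# inside the guest: non-zero DISC-GRAN / DISC-MAX means the disk advertises discard support
lsblk --discard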

EDIT 3: So I found the Discard option under each VM in the Hardware section of the Proxmox GUI. It is ticked. I executed the trim command on all my VMs and also on the PVE host with fstrim --fstab --verbose, but only trivial amounts of space were trimmed. So I think the problem is elsewhere.

While I am searching for the culprit, can I maybe move one of the VMs from local-lvm over to local? (Or is that another complicated or impossible task?)
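(For reference, moving a disk between storages is possible from the GUI (VM > Hardware > Move Disk) or on the CLI; a minimal sketch, with VM ID 100 and disk scsi0 as placeholder names, assuming local is a directory storage:)
Code:
# move the VM's first SCSI disk from local-lvm to the directory storage "local";
# a directory storage needs an image format, e.g. qcow2
qm move_disk 100 scsi0 local --format qcow2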
 

Attachments

  • Screenshot 2022-09-06 at 00.03.27.png (66.8 KB)
How did you resolve it? I have the same issue.
I didn't, did you?

As far as I can tell, I have Discard enabled, and I also put fstrim -a in a cron job on the PVE host and on all my VMs. It runs daily, but the disk space is still slowly filling up... I am at a total loss.
 
You need to check that the complete TRIM chain is working:
- pct fstrim VMID for containers and fstrim -a (per cron) on the PVE host
- thin provisioning enabled for your PVE VM/LXC storage
- all guest OSs doing discard (for example fstrim -a for all Linux guests)
- a disk protocol that supports TRIM, so not IDE or VirtIO Block but VirtIO SCSI
- the Discard checkbox set for all your virtual disks
- a physical disk controller that supports TRIM commands (not all RAID controllers do)

When using ZFS you could run zfs list -o space on the PVE host. If discard is working, the refreservation should be very low.

Also make sure you have no snapshots, as snapshots will prevent freeing up space.
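A rough sketch of how some of these checks could look on the PVE host (100/101 are placeholder VM/container IDs; the guest-exec line assumes the QEMU guest agent is installed and enabled in the VM):
Code:
# Discard flag on the virtual disks (the disk lines should contain discard=on)
qm config 100 | grep -Ei 'scsi|sata|virtio|ide'

# trim a container's thin-provisioned rootfs
pct fstrim 101

# trigger a trim inside a VM via the QEMU guest agent
qm guest exec 100 -- fstrim -a

# afterwards, check whether Data% of the thin pool/volumes went down
lvs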
 
Thank you for the detailed response.

I actually followed this guide
https://ardevd.medium.com/slimming-...o-thin-provisioning-with-proxmox-1ac0602c3b34

and when I run fstrim -av I now see that it is working and it tells me that space has been trimmed, but this change does not appear to be reflected: my disk usage is not coming down. The guide explains that, for this to happen, Thin Provisioning has to be enabled in the Datacenter view under Storage. However, I have no such option to tick; the Thin Provisioning checkbox just isn't there as it appears in the guide.

I think this is because I am not using ZFS. When I go to the pve view in the Proxmox GUI and go down to Disks, the disk that is filling up is listed under LVM and under LVM-Thin, but the ZFS view is empty.

Does this mean that I cannot reclaim this space, since thin provisioning can only be enabled on ZFS disks?
Basically, should I be considering rebuilding the deployment to use ZFS instead of LVM?

EDIT:
I am trying to format an external drive to use inside Proxmox; the idea is that I can use that disk to create ZFS storage. However, when I go to pve > Disks > ZFS and click Create: ZFS, I get "No disks unused" and my external drive is not recognised here. How should I format the external drive so that it is recognised?

EDIT 2:
I followed this guide https://nubcakes.net/index.php/2019/03/05/how-to-add-storage-to-proxmox/
and managed to get Proxmox to recognise my external disk. I still could not create a ZFS pool out of it, as I still get the "no disks unused" error when trying, but at least I was able to back up the VMs. Now that I have a backup, would it make sense to go to the local-lvm that is constantly filling up, remove the disks, delete the VMs, create a ZFS pool, and restore the backed-up VMs to the new ZFS pool? Could that work?
 
I think this is because I am not using ZFS. When I go to the pve view in the Proxmox GUI and go down to Disks, the disk that is filling up is listed under LVM and under LVM-Thin, but the ZFS view is empty.

Does this mean that I cannot reclaim this space, since thin provisioning can only be enabled on ZFS disks?
Basically, should I be considering rebuilding the deployment to use ZFS instead of LVM?
LVM-Thin is thin-provisioned too, that's why there is a "Thin" in the name. So for LVM-Thin you need to make sure that TRIM/discard is working too. For good old (thick) LVM you don't need that when using CMR HDDs, but it might still be a good idea when using SSDs or SMR HDDs.
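You can confirm how your storages are defined on the host; a quick look, assuming nothing beyond a default setup, might be:
Code:
# list all storages and their types ("lvmthin" means thin-provisioned LVM)
pvesm status

# the underlying storage definitions live here
cat /etc/pve/storage.cfg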
I am trying to format an external drive to use inside Proxmox; the idea is that I can use that disk to create ZFS storage. However, when I go to pve > Disks > ZFS and click Create: ZFS, I get "No disks unused" and my external drive is not recognised here. How should I format the external drive so that it is recognised?
You have to wipe it first. This can be done using the webUI since PVE 7.X. PVE won't let you use a disk that is already partitioned, so you don't lose data by accident.
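(If the webUI option isn't available, the same can be done from the shell; a minimal sketch, assuming the external drive shows up as the hypothetical device /dev/sdX and that any data on it can be discarded:)
Code:
# double-check which device is the external drive before wiping anything
lsblk

# remove all filesystem/partition-table signatures so PVE sees the disk as unused
wipefs --all /dev/sdX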
I followed this guide https://nubcakes.net/index.php/2019/03/05/how-to-add-storage-to-proxmox/
and managed to get Proxmox to recognise my external disk. I still could not create a ZFS pool out of it, as I still get the "no disks unused" error when trying, but at least I was able to back up the VMs. Now that I have a backup, would it make sense to go to the local-lvm that is constantly filling up, remove the disks, delete the VMs, create a ZFS pool, and restore the backed-up VMs to the new ZFS pool? Could that work?
I wouldn't change from LVM-Thin to ZFS just because you can't get your discard working. Keep in mind that ZFS will use more RAM, will be slower, and will wear out the disks faster. ZFS should only be used if you really need its features, care about your data integrity, and have the right hardware.
 
Thank you for the clarifications. Unfortunately I feel like I'm at the end of the line here. I spent hours over multiple days on this and only managed to get the fstrim command to run and show trimmed space, but that is not reflected anywhere: the used disk space does not drop.

To recap:
my local-lvm is almost full and still filling up. The hard disks under the VM Hardware view are SCSI, the Discard option is ticked, and SSD Emulation is ticked. The trim command does not appear to have any effect, even though the CLI shows how many GB it trimmed.
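(One way to at least narrow down which volume is eating the space is to log the thin-volume usage over time and compare; a minimal sketch with no assumptions beyond the default pve volume group:)
Code:
# append a timestamped snapshot of thin volume usage to a log file;
# run it a few times over a day (or from cron) and compare the Data% values
date >> /root/lvs-usage.log
lvs -o lv_name,lv_size,data_percent pve >> /root/lvs-usage.log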

Any other ideas before I throw in the towel and start paying for cloud-hosted solutions for the apps currently in my Proxmox environment? :(
 
What hardware are you using (especially what disk controller)?

What are vgdisplay and lvdisplay returning?
 
The hardware is a Dell Optiplex 3080 small desktop PC. I'm not sure how to specifically check what the disk controller is.
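(For reference, one way to list storage-related controllers from the PVE shell, using only standard tools:)
Code:
# list PCI devices that look like disk controllers (SATA/NVMe/RAID/IDE)
lspci | grep -iE 'sata|nvme|raid|ide'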

vgdisplay:
Code:
root@pve:~# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 302
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 5
Max PV 0
Cur PV 1
Act PV 1
VG Size 237.97 GiB
PE Size 4.00 MiB
Total PE 60921
Alloc PE / Size 56827 / 221.98 GiB
Free PE / Size 4094 / 15.99 GiB
VG UUID RLhqpJ-7cSu-iTCY-eMZ3-VAFr-Bhnt-gxKUDQ






lvdisplay:
Code:
root@pve:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID iLTEdp-3ZGE-388L-yCrw-ZClq-dk1P-os5g4U
LV Write Access read/write
LV Creation host, time proxmox, 2022-03-07 23:03:29 +0000
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID K3NQ11-xKIx-kSix-3HQH-bKhH-4mke-FySUe3
LV Write Access read/write
LV Creation host, time proxmox, 2022-03-07 23:03:29 +0000
LV Status available
# open 1
LV Size 59.25 GiB
Current LE 15168
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID LRTpzv-z2XG-ayib-q0gI-M0mf-fWYH-rHfee6
LV Write Access read/write (activated read only)
LV Creation host, time proxmox, 2022-03-07 23:03:35 +0000
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size <151.63 GiB
Allocated pool data 92.07%
Allocated metadata 5.46%
Current LE 38817
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5

--- Logical volume ---
LV Path /dev/pve/vm-103-disk-0
LV Name vm-103-disk-0
VG Name pve
LV UUID Ix7MGo-wJR5-mDCg-89zz-PzLd-3XIs-yuipbK
LV Write Access read/write
LV Creation host, time pve, 2022-03-09 16:34:42 +0000
LV Pool name data
LV Status available
# open 1
LV Size 40.00 GiB
Mapped size 84.45%
Current LE 10240
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6

--- Logical volume ---
LV Path /dev/pve/vm-103-disk-1
LV Name vm-103-disk-1
VG Name pve
LV UUID rBie63-8Zp4-GTkz-nLTg-98Yu-wMlG-m2xmsf
LV Write Access read/write
LV Creation host, time pve, 2022-03-09 16:34:43 +0000
LV Pool name data
LV Status available
# open 1
LV Size 100.00 GiB
Mapped size 90.57%
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7

--- Logical volume ---
LV Path /dev/pve/vm-100-disk-0
LV Name vm-100-disk-0
VG Name pve
LV UUID fOf3DT-r0VH-qjF9-pCbF-anrX-U2mW-0Nolts
LV Write Access read/write
LV Creation host, time pve, 2022-03-10 14:37:21 +0000
LV Pool name data
LV Status available
# open 1
LV Size 64.00 GiB
Mapped size 23.84%
Current LE 16384
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
 
