[SOLVED] local-lvm full - only one VM

twotimes2

New Member
Sep 26, 2024
Hi.
I've recently run into an issue with one VM (Home Assistant) filling up my disk.
I thought I had fixed it, but now my local-lvm is full and, as a result, the VM won't start.
Is there any way to clean up the disk from within PVE?
I have my Home Assistant backups on another server, so a complete reinstall is on the table.
I still don't know why the disk keeps filling up, since Home Assistant alone shouldn't use that much space.
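
In case it matters, the pool usage can also be seen at a glance with lvs (the Data% column of the "data" LV is how full local-lvm really is):

lvs -a pve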


/etc/pve/storage.cfg:

dir: local
path /var/lib/vz
content iso,vztmpl,backup

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images


lsblk:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 118.7G 0 part
├─pve-swap 253:0 0 5G 0 lvm [SWAP]
├─pve-root 253:1 0 29.5G 0 lvm /
├─pve-data_tmeta 253:2 0 1G 0 lvm
│ └─pve-data-tpool 253:4 0 67.5G 0 lvm
│ ├─pve-data 253:5 0 67.5G 1 lvm
│ ├─pve-vm--100--disk--1 253:6 0 4M 0 lvm
│ └─pve-vm--100--disk--2 253:7 0 111G 0 lvm
└─pve-data_tdata 253:3 0 67.5G 0 lvm
└─pve-data-tpool 253:4 0 67.5G 0 lvm
├─pve-data 253:5 0 67.5G 1 lvm
├─pve-vm--100--disk--1 253:6 0 4M 0 lvm
└─pve-vm--100--disk--2 253:7 0 111G 0 lvm


df -h:

Filesystem Size Used Avail Use% Mounted on
udev 2.9G 0 2.9G 0% /dev
tmpfs 586M 1.2M 585M 1% /run
/dev/mapper/pve-root 29G 6.9G 21G 26% /
tmpfs 2.9G 43M 2.9G 2% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/fuse 128M 16K 128M 1% /etc/pve
tmpfs 586M 0 586M 0% /run/user/0


vgdisplay:

--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 46
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size <118.74 GiB
PE Size 4.00 MiB
Total PE 30397
Alloc PE / Size 26621 / <103.99 GiB
Free PE / Size 3776 / 14.75 GiB
VG UUID 4hDdiz-HunL-s10e-YlRJ-qiUU-hAgN-iVCHc4


lvdisplay:

--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID uGU7Dj-sxB4-hAQN-NZ0J-NOhr-jUqm-MF61e8
LV Write Access read/write
LV Creation host, time proxmox, 2021-10-29 17:53:51 +0200
LV Status available
# open 2
LV Size 5.00 GiB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID xXgGYT-5tZU-POaf-RO5J-ELHN-zeZM-OTIl0G
LV Write Access read/write
LV Creation host, time proxmox, 2021-10-29 17:53:51 +0200
LV Status available
# open 1
LV Size 29.50 GiB
Current LE 7552
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID xVvrWy-nRt6-JinJ-IMzc-mEQ8-SRYg-eVm6vV
LV Write Access read/write (activated read only)
LV Creation host, time proxmox, 2021-10-29 17:54:11 +0200
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size <67.49 GiB
Allocated pool data 100.00%
Allocated metadata 4.52%
Current LE 17277
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5

--- Logical volume ---
LV Path /dev/pve/vm-100-disk-1
LV Name vm-100-disk-1
VG Name pve
LV UUID r3O8h2-ETlX-Vdza-FSeO-tCXE-uStu-OTY5Jh
LV Write Access read/write
LV Creation host, time pve, 2021-10-29 18:27:20 +0200
LV Pool name data
LV Status available
# open 1
LV Size 4.00 MiB
Mapped size 3.12%
Current LE 1
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6

--- Logical volume ---
LV Path /dev/pve/vm-100-disk-2
LV Name vm-100-disk-2
VG Name pve
LV UUID gNEsMW-9Erb-0VfH-uQeO-Kd3d-pZBR-oYpzCm
LV Write Access read/write
LV Creation host, time pve, 2021-10-29 18:39:32 +0200
LV Pool name data
LV Status available
# open 1
LV Size 111.00 GiB
Mapped size 60.80%
Current LE 28416
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
 
Solved:

First I extended the local-lvm thin pool by 5 GB so that Home Assistant was able to start again, using:

lvextend -L+5G pve/data
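
To be on the safe side, I'd check that the volume group still has free extents before extending (mine showed 14.75 GiB free in vgdisplay above) and confirm the new pool size afterwards; these are plain lvm2 commands, nothing Proxmox-specific:

vgs pve        # the VFree column needs room for the +5G
lvs pve/data   # shows the new LSize and the pool's Data%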

Then I enabled "Discard" and "SSD emulation" on the VM's disk.
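
For reference, the same thing should be doable from the CLI with qm set; for this VM it would be roughly the line below, assuming the disk is attached as scsi0 (adjust the bus/slot to whatever the VM's hardware actually shows):

qm set 100 --scsi0 local-lvm:vm-100-disk-2,discard=on,ssd=1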

After this I ran the following:

fstrim -va
systemctl enable fstrim.timer
systemctl start fstrim.timer
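
(For what it's worth: with discard enabled, an fstrim inside the VM is what actually hands the freed blocks back to the thin pool. To confirm space really came back, the pool usage can be checked again:)

lvs pve -o lv_name,lv_size,data_percent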

Hope this helps somebody else.
 
