Local-lvm is full and the VMs won't turn on

Fusion21

New Member
Apr 3, 2024
Hello to anyone reading. The title pretty much describes everything: I have an LXC container on local-lvm that runs all my Docker containers, so basically my whole homelab. local-lvm is full, which caused the container to crash, and now it doesn't start anymore. I tried to back it up and it failed; I tried moving the virtual disk to another disk, but I kept getting errors. I'm not sure what to do. Can anyone help me with this? I can't lose this LXC, it's very important to my whole homelab.

Another note: the local disk which Proxmox automatically creates is barely used, I think about 15% of it is used.
 
Please share
Bash:
lsblk -o+FSTYPE
vgs
lvs

pct fstrim CTIDHERE might already help here.
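Side note for anyone reading along: the value that decides whether a thin pool still accepts writes is the Data% column of the pool in `lvs`; at 100.00 the pool is full and guests on it start failing. A minimal sketch of pulling that value out, using the `lvs` line posted later in this thread as sample data rather than live commands:

```shell
# On a live Proxmox host this would simply be:
#   lvs --noheadings -o data_percent pve/data
# Here the lvs line posted in this thread is used as sample input so the
# parsing can be demonstrated anywhere.
lvs_line='  data          pve twi-aotzD- <338.22g             100.00 3.74'
# Field 5 of the flattened line (LV, VG, Attr, LSize, Data%) is Data%.
echo "$lvs_line" | awk '{print $5}'   # prints 100.00 for this sample
```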
 
Code:
root@HomeServer:~# lsblk -o+FSTYPE
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS          FSTYPE
loop2                          7:2    0     2G  0 loop                      ext4
sda                            8:0    0 465.8G  0 disk                    
├─sda1                         8:1    0  1007K  0 part                    
├─sda2                         8:2    0     1G  0 part /boot/efi            vfat
└─sda3                         8:3    0 464.8G  0 part                      LVM2_member
  ├─pve-swap                 252:0    0   7.6G  0 lvm  [SWAP]               swap
  ├─pve-root                 252:1    0    96G  0 lvm  /                    ext4
  ├─pve-data_tmeta           252:2    0   3.4G  0 lvm                      
  │ └─pve-data-tpool         252:4    0 338.2G  0 lvm                      
  │   ├─pve-data             252:5    0 338.2G  1 lvm                      
  │   └─pve-vm--100--disk--0 252:6    0   425G  0 lvm                       ext4
  └─pve-data_tdata           252:3    0 338.2G  0 lvm                      
    └─pve-data-tpool         252:4    0 338.2G  0 lvm                      
      ├─pve-data             252:5    0 338.2G  1 lvm                      
      └─pve-vm--100--disk--0 252:6    0   425G  0 lvm                       ext4
sdb                            8:16   0 596.2G  0 disk                    
└─sdb1                         8:17   0 596.2G  0 part /mnt/pve/Second-disk ext4
sr0                           11:0    1  1024M  0 rom
root@HomeServer:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   4   0 wz--n- <464.76g 16.00g
 
 
root@HomeServer:~# lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotzD- <338.22g             100.00 3.74                           
  root          pve -wi-ao----   96.00g                                                   
  swap          pve -wi-ao----   <7.64g                                                   
  vm-100-disk-0 pve Vwi-a-tz--  425.00g data        79.58                                 
root@HomeServer:~#

Not sure if I did the code block correctly.
 
That's fine, thanks. The good news is we can fix this by increasing data a little bit, but I think you have snapshots you could delete first.
Snapshots consume more space the longer they exist, or rather, the more things change after they were taken.
Please share
Bash:
qm listsnapshot 100
lvs -a
If you have some, please delete them via the GUI; if not, run this:
Bash:
lvresize -r -L +1G /dev/pve/data

Afterwards make sure discard is enabled for the VM's disk(s), start the VM, and then run a manual fstrim -a inside of it.
The goal is to bring Data% down.
 
Code:
root@HomeServer:~# qm listsnapshot 100
Configuration file 'nodes/HomeServer/qemu-server/100.conf' does not exist
root@HomeServer:~# lvs -a
  LV              VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotzD- <338.22g             100.00 3.74                           
  [data_tdata]    pve Twi-ao---- <338.22g                                                   
  [data_tmeta]    pve ewi-ao----   <3.45g                                                   
  [lvol0_pmspare] pve ewi-------   <3.45g                                                   
  root            pve -wi-ao----   96.00g                                                   
  swap            pve -wi-ao----   <7.64g                                                   
  vm-100-disk-0   pve Vwi-a-tz--  425.00g data        79.58                                 
root@HomeServer:~#
 
Ah, of course, "VM" 100 is a CT. I hate that both are named vm-. Use pct listsnapshot 100 for those.
I see the problem now: 100 is over-allocated and larger than the thin pool itself, so about 80% of its size is 100% of the pool's. I'd continue with the resize and trim in this case. If Data% does not go down significantly after resizing and pct fstrim 100, I'd go inside the CT and see what uses so much space with this:
Code:
apt install gdu
gdu /
 
Code:
root@HomeServer:~# pct listsnapshot 100
`-> current                                             You are here!
root@HomeServer:~#


So you're saying I should continue with this command:
lvresize -r -L +1G /dev/pve/data
in the main shell? And if it doesn't work, I should use gdu to check the container?

Also, I think I mentioned that it was an LXC in the original post, but if not, sorry I missed it.

Another note: it's currently 3 AM, so don't expect a response for at least a few more hours. Sorry!
 
Yep, you need to reduce the usage of the LV through various means. Resizing just gets the pool and CT running again; it is not a permanent fix. Please don't quote whole messages.
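On "reduce the usage of the LV through various means": the first step inside the CT is usually finding what eats the space. gdu is interactive; plain du works everywhere. A sketch that runs against a scratch directory so it is safe to execute anywhere (inside the CT you would point it at /, and the Docker cleanup line is only an assumption about this particular setup):

```shell
# Inside the CT the real invocations would be something like:
#   du -xh / --max-depth=2 | sort -rh | head -20   # biggest directories
#   docker system prune                            # if Docker is the culprit (assumption)

# Demonstration against a scratch directory so this is safe to run anywhere:
scratch=$(mktemp -d)
mkdir -p "$scratch/big" "$scratch/small"
dd if=/dev/zero of="$scratch/big/blob" bs=1024 count=300 status=none
dd if=/dev/zero of="$scratch/small/blob" bs=1024 count=10 status=none
# -k: sizes in KiB; sort numerically descending puts the largest dir first
du -k "$scratch"/*/ | sort -rn | head -1
rm -r "$scratch"
```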
 
Hey, quick question: is there a way to reduce the size of local and maybe use that space in local-lvm? I mean, I don't use local at all.
 
About the over-allocation: yes, I know the disk for that container is bigger than the storage. I made that disk when I had no idea how to use Proxmox, and now I'm paying for my mistakes.
 
You can't easily shrink a live/mounted file system and it's also dangerous as you could shrink the LV too much by accident.
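For completeness: shrinking pve-root is possible, but only offline (from a live/rescue ISO) and only with backups in hand, because the ordering is unforgiving. A heavily hedged sketch with illustrative sizes, not commands to copy blindly:

```shell
# DO NOT run these against a mounted filesystem. Sketch only, from a live ISO,
# with sizes chosen for illustration (pve-root is 96G, ~15% used per the OP):
#   e2fsck -f /dev/pve/root              # ext4 must be checked before shrinking
#   resize2fs /dev/pve/root 40G          # shrink the filesystem FIRST...
#   lvreduce -L 45G /dev/pve/root        # ...then the LV, with a safety margin
#   resize2fs /dev/pve/root              # grow the FS back to fill the 45G LV
#   lvextend -L +51G /dev/pve/data       # hand the freed space to the thin pool
# The order matters: reducing the LV below the filesystem size destroys it.
echo "order: fsck, shrink FS, shrink LV (with margin), regrow FS, extend pool"
```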
 