I've allowed local-lvm to become 100% full and now have an io-error on one of my VMs. How do I fix this?

J1D2A3 · New Member · May 31, 2022
Hi all,

I'll start this off by saying that this may not be as coherent as I would like it to be: I'm currently abroad and have gone down with covid, which isn't helping my problem-solving ability. Thankfully I run Wireguard, so I have connectivity back to my PVE installation. I'm a relatively new user of PVE and my inexperience has led me to a significant issue; I'm sure you'll see many things that I am doing wrong, or at the very least inefficiently. I have a VM running HomeAssistant which currently has a status of io-error, and one of my containers won't start, which appears to be due to the local-lvm storage being maxed out. I believe I may have used default settings during installation and am not currently using the full space on the SSD. My hardware is as follows:

HP EliteDesk 800 G2 Mini
8 core i7-6700T
16GB RAM
128GB SSD
256GB NVME drive

I have spent a day or so googling for the answer but can't seem to work it out. Various posts have asked for the outputs of vgdisplay, lvdisplay, lsblk, lvs, vgs and pvs, so I have pasted them below; as a new user, I don't really know what I'm looking at. I've also attached some screenshots from PVE. Would someone be able to ELI5 what I need to do to get out of this mess? If you need more information, let me know and I will provide it. I think I have unallocated space on the SSD, as the local storage is 31GB and the local-lvm is 69GB. Can I access / allocate the extra ~28GB?

Thank you in advance, and hopefully I've been able to put this together in a coherent manner!

vgdisplay
root@hestia:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  105
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                8
  Open LV               6
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <118.74 GiB
  PE Size               4.00 MiB
  Total PE              30397
  Alloc PE / Size       26621 / <103.99 GiB
  Free  PE / Size       3776 / 14.75 GiB
  VG UUID               Or1yHp-LhY9-lXqn-UnFr-ey2z-27lJ-3LdIyK

  --- Volume group ---
  VG Name               local-nvme
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               232.88 GiB
  PE Size               4.00 MiB
  Total PE              59618
  Alloc PE / Size       8192 / 32.00 GiB
  Free  PE / Size       51426 / 200.88 GiB
  VG UUID               sAHxd5-8nap-JUc2-HbWT-VikT-oYzO-hqdjQ8

lvdisplay
root@hestia:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                WcH39j-t9hB-GzPE-oXpA-Zuf2-MdOE-YqYmm0
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-02-04 17:38:05 +0000
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                gTauCp-i41h-CEuB-UCRF-3Tgd-vb49-85yxMc
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-02-04 17:38:05 +0000
  LV Status              available
  # open                 1
  LV Size                29.50 GiB
  Current LE             7552
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                PQh1NE-Z4cL-opRb-lUmF-gwgf-SN5E-eaMTgh
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2022-02-04 17:38:09 +0000
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <64.49 GiB
  Allocated pool data    100.00%
  Allocated metadata     4.60%
  Current LE             16509
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Logical volume ---
  LV Path                /dev/pve/vm-101-disk-0
  LV Name                vm-101-disk-0
  VG Name                pve
  LV UUID                fpydMC-codS-F0OR-LRZT-VULg-jL20-GTSZ9t
  LV Write Access        read/write
  LV Creation host, time hestia, 2022-02-06 09:26:47 +0000
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Mapped size            96.44%
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7

  --- Logical volume ---
  LV Path                /dev/pve/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                pve
  LV UUID                JU0R9d-T58t-31jk-pOKT-b3Cq-FWUZ-cdQzu7
  LV Write Access        read/write
  LV Creation host, time hestia, 2022-02-06 11:57:21 +0000
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            99.28%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8

  --- Logical volume ---
  LV Path                /dev/pve/vm-103-disk-0
  LV Name                vm-103-disk-0
  VG Name                pve
  LV UUID                10rKOG-TFn2-DuPu-qISl-Tijk-HoBW-sch3mi
  LV Write Access        read/write
  LV Creation host, time hestia, 2022-02-06 14:15:47 +0000
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                64.00 GiB
  Mapped size            46.86%
  Current LE             16384
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:9

  --- Logical volume ---
  LV Path                /dev/pve/vm-104-disk-0
  LV Name                vm-104-disk-0
  VG Name                pve
  LV UUID                J74WX9-7XfE-ZUBb-TYrz-8MTm-1Y4z-HWv6AL
  LV Write Access        read/write
  LV Creation host, time hestia, 2022-02-07 09:04:01 +0000
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                4.00 MiB
  Mapped size            0.00%
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:10

  --- Logical volume ---
  LV Path                /dev/pve/vm-104-disk-1
  LV Name                vm-104-disk-1
  VG Name                pve
  LV UUID                9Uqaoj-V9H4-k6SL-OPEL-L8on-BiMv-ypriWR
  LV Write Access        read/write
  LV Creation host, time hestia, 2022-02-07 09:04:02 +0000
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Mapped size            89.54%
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:11

  --- Logical volume ---
  LV Path                /dev/local-nvme/vm-104-disk-0
  LV Name                vm-104-disk-0
  VG Name                local-nvme
  LV UUID                NcKCBB-7gCP-D3Yc-o2V3-yxcN-ChWc-h6ywju
  LV Write Access        read/write
  LV Creation host, time hestia, 2022-02-09 10:19:30 +0000
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

lsblk
root@hestia:~# lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                              8:0    0 119.2G  0 disk
├─sda1                           8:1    0  1007K  0 part
├─sda2                           8:2    0   512M  0 part /boot/efi
└─sda3                           8:3    0 118.7G  0 part
  ├─pve-swap                   253:1    0     8G  0 lvm  [SWAP]
  ├─pve-root                   253:2    0  29.5G  0 lvm  /
  ├─pve-data_tmeta             253:3    0     1G  0 lvm
  │ └─pve-data-tpool           253:5    0  64.5G  0 lvm
  │   ├─pve-data               253:6    0  64.5G  1 lvm
  │   ├─pve-vm--101--disk--0   253:7    0     4G  0 lvm
  │   ├─pve-vm--102--disk--0   253:8    0     2G  0 lvm
  │   ├─pve-vm--103--disk--0   253:9    0    64G  0 lvm
  │   ├─pve-vm--104--disk--0   253:10   0     4M  0 lvm
  │   └─pve-vm--104--disk--1   253:11   0    32G  0 lvm
  └─pve-data_tdata             253:4    0  64.5G  0 lvm
    └─pve-data-tpool           253:5    0  64.5G  0 lvm
      ├─pve-data               253:6    0  64.5G  1 lvm
      ├─pve-vm--101--disk--0   253:7    0     4G  0 lvm
      ├─pve-vm--102--disk--0   253:8    0     2G  0 lvm
      ├─pve-vm--103--disk--0   253:9    0    64G  0 lvm
      ├─pve-vm--104--disk--0   253:10   0     4M  0 lvm
      └─pve-vm--104--disk--1   253:11   0    32G  0 lvm
nvme0n1                        259:0    0 232.9G  0 disk
└─local--nvme-vm--104--disk--0 253:0    0    32G  0 lvm

lvs
root@hestia:~# lvs
  LV            VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-104-disk-0 local-nvme -wi-ao----  32.00g
  data          pve        twi-aotzD- <64.49g             100.00 4.60
  root          pve        -wi-ao----  29.50g
  swap          pve        -wi-ao----   8.00g
  vm-101-disk-0 pve        Vwi-aotz--   4.00g data        96.44
  vm-102-disk-0 pve        Vwi-aotz--   2.00g data        99.28
  vm-103-disk-0 pve        Vwi-a-tz--  64.00g data        46.86
  vm-104-disk-0 pve        Vwi-aotz--   4.00m data        0.00
  vm-104-disk-1 pve        Vwi-aotz--  32.00g data        89.54

vgs
root@hestia:~# vgs
  VG         #PV #LV #SN Attr   VSize    VFree
  local-nvme   1   1   0 wz--n-  232.88g 200.88g
  pve          1   8   0 wz--n- <118.74g  14.75g

pvs
root@hestia:~# pvs
  PV           VG         Fmt  Attr PSize    PFree
  /dev/nvme0n1 local-nvme lvm2 a--   232.88g 200.88g
  /dev/sda3    pve        lvm2 a--  <118.74g  14.75g
 

Attachments

  • PVE local-lvm CT Volumes.png
  • PVE local-lvm Summary.png
  • PVE local-lvm VM Disks.png
  • PVE-Search.png
  • PVE-Summary.png
You are using thin provisioning, and nothing will prevent your guests from writing your LVM thin pool full. If that happens, your pool will become inoperable and your guests' data might get corrupted. You always have to monitor your pool's free capacity yourself and make sure this never happens.
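As a rough sketch, such a check can be a small cron script (the pool name pve/data is the PVE default, and the 90% threshold is an arbitrary choice; adjust both for your setup):

#!/bin/sh
# Warn when the LVM thin pool is nearly full.
# Assumes the default Proxmox pool "pve/data" and a 90% threshold.
THRESHOLD=90
# lvs prints e.g. " 100.00"; strip spaces and the decimal part.
USED=$(lvs --noheadings -o data_percent pve/data | tr -d ' ' | cut -d. -f1)
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "WARNING: thin pool pve/data is ${USED}% full"
fi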
To free up space you could:
1.) Delete snapshots
2.) Run fstrim -a inside a VM, or pct fstrim <YourLXCsVMID> on your host (see the sketch after this list)
3.) Delete unneeded files like logs in your guests
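For reference, the trim step looks roughly like this (the VMID 102 is just an example, and fstrim inside a VM only returns space to the thin pool if the virtual disk has the discard option enabled):

# inside the VM (all filesystems, verbose):
fstrim -av

# on the PVE host, for a container (example VMID):
pct fstrim 102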

It might, however, be that none of the above works anymore: once the pool is completely full it can't write a single byte, so nothing can be deleted or freed up and the pool is basically read-only.
In that case you could try to increase your pool's size using vgextend and lvextend, provided there is unallocated space. If there is no unallocated space, you would need to buy a bigger disk, clone the old disk over to it (for example with the dd command) and then extend on the new, bigger disk.
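Going by your vgs output, the pve VG already has ~14.75 GiB of free extents, so in your case it might be enough to grow the pool directly without touching partitions. A sketch (the sizes are examples; check vgs/lvs first):

# grow the thin data pool by 14 GiB out of the VG's free space:
lvextend -L +14G pve/data
# or hand it everything that is free:
lvextend -l +100%FREE pve/data
# metadata is only ~4.6% used in your lvs output, so it probably
# doesn't need growing; if it ever does:
# lvextend --poolmetadatasize +1G pve/data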
 
Thank you for taking the time to respond - I think I understand the majority of what you have said! Are there any alternatives that I should be using instead of thin provisioning?

Since posting, I have continued to play around with it. I managed to force-stop the VM through PowerShell, as it wouldn't stop in the PVE UI, and was then able to move the VM's storage over to my empty NVME drive. Thankfully, the non-functioning VM and container both started fine once this was done. I now know that I need to keep a closer eye on the state of my storage to ensure this doesn't happen again.
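(For anyone finding this later: the CLI equivalent of what I did in the UI is roughly the following on PVE 7. The VMID 101, the disk name scsi0 and the target storage local-nvme are stand-ins; substitute your own values.)

qm stop 101                          # hard stop, when a clean shutdown hangs
qm move_disk 101 scsi0 local-nvme    # move the disk to another storage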

I'll look into vgextend and lvextend to see whether I can access the unallocated 28GB on the SSD.
 
