io-error - local-lvm full

f.somenzi

Member
Mar 15, 2021
Hello,
I have a big problem with a VM installed at a customer's site. It boots, but as soon as we perform operations that need some disk space, it crashes and goes into "io-error".
I noticed that local-lvm is full, but I can't understand why it is so small.
The physical machine has a 250 GB hard disk, yet local-lvm is only 152 GB, and it's full. I don't remember dedicating only this much space to it. How can I add space?

[Attached screenshots: local-lvm.JPG, content.JPG, hardware.JPG]
 
Based on the output, you have only allocated 152 GB to the data pool, with the rest being used for the root partition of the hypervisor. As a result, you have overprovisioned the VM. The situation you are in is not great: most likely there is already some data corruption/loss. You should create a new pool ASAP, perhaps on sdb, and move the VM to it.

P.S. FWIW, it wasn't that you picked a small pool size; you most likely used the default layout and didn't realize that you were sharing the physical disk between the hypervisor and the VMs.

Here is a link that should guide you through creating a new pool on sdb: https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)
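A rough sketch of what that could look like, assuming sdb is empty and can be wiped, and using placeholder names (vgdata / data / sdb-thin) that you would adapt:

# WARNING: this destroys anything currently on /dev/sdb
pvcreate /dev/sdb
vgcreate vgdata /dev/sdb
# create a thin pool using most of the new VG (leave some headroom for metadata)
lvcreate -l 95%FREE -T vgdata/data
# register it in PVE as an LVM-thin storage called "sdb-thin" (placeholder name)
pvesm add lvmthin sdb-thin --vgname vgdata --thinpool data --content images,rootdir

Once that storage exists, you can move the VM's disk onto it (e.g. with qm move-disk, or from the GUI).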

 
Would it be a good idea to delete some data from the guest VM? Would that let me regain some space?
 
It might, but you may need to trim the VM so the freed blocks are actually returned to the thin pool: https://gist.github.com/hostberg/86bfaa81e50cc0666f1745e1897c0a56
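Roughly, and assuming a Linux guest whose disk is attached as scsi0 on a VM with ID 100 (both placeholders):

# on the PVE host: make sure the virtual disk passes discard/TRIM through to the thin pool
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
# inside the guest: trim all mounted filesystems so freed blocks return to the pool
fstrim -av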

I would suggest that you don't start the VM, and that you migrate it instead. It's overprovisioned by a lot, and in many cases you can't shrink the filesystems safely.
Another option is to extend the VG with sdb and then extend the LV, but if sdb is not equivalent in performance it won't be great.

https://www.redhat.com/sysadmin/resize-lvm-simple
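As a rough sketch of that second option, assuming the default PVE volume group "pve" and thin pool "data" (verify the names with vgs and lvs first):

pvcreate /dev/sdb
vgextend pve /dev/sdb
# grow the thin pool with the newly added space
lvextend -l +100%FREE pve/data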


 
From your screenshots there is an sdb with a partition on it. It does not appear to be mounted. You can try "fdisk -l" to see if there is a filesystem on it. If there is, you will need to figure out whether it contains data that you need. If it is usable, you might be able to mount it, create a "directory" type storage on /mnt/sdb/proxmox, and then migrate the VM out of LVM to qcow2.
There are many variables here as you can see.
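A rough sketch of that path, assuming the partition is /dev/sdb1, the VM is ID 100 on scsi0, and "sdb-dir" is just a placeholder storage name:

fdisk -l /dev/sdb
mkdir -p /mnt/sdb
mount /dev/sdb1 /mnt/sdb          # add an /etc/fstab entry if you want it to persist
mkdir -p /mnt/sdb/proxmox
pvesm add dir sdb-dir --path /mnt/sdb/proxmox --content images
# move the VM disk off the full thin pool (qm move_disk on older PVE versions)
qm move-disk 100 scsi0 sdb-dir --format qcow2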

Other useful commands:
lsscsi
nvme list


Good luck


 

It is an external USB device. No way...


Where can I find instructions about creating a new pool and migrating the virtual machine?
 
Do you think this could be another solution?

  • Install a new disk in the physical machine
  • Install a new Proxmox VE on the new disk, this time sizing local-lvm correctly
  • I previously configured a scheduled snapshot backup to an NFS share; I would add that NFS storage to the new hypervisor and then restore the saved backup file on the new Proxmox...
It is probably a bad idea...
 
https://pve.proxmox.com/wiki/Storage is the primary page;
https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM) is a subset of the above, for that specific storage type.

There are a few ways to resolve your problem. I don't think installing another PVE is in the top 10. However, if you are limited to one disk in your server, I guess restoring from backup is one way to approach it. There are too many variables to hash out in real time on the forum. You may need to engage someone on a consulting basis if this is a production sev1 issue.
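If you do go the restore route, a rough sketch of the commands involved (the storage name, NFS server/export, VM ID, and backup filename below are placeholders, not your actual values):

# make the NFS backup storage known to the new hypervisor (placeholder server/export)
pvesm add nfs nfs-backup --server 192.168.1.10 --export /export/pve-backups --content backup
# restore the vzdump archive onto the (now correctly sized) local-lvm; filename is a placeholder
qmrestore /mnt/pve/nfs-backup/dump/vzdump-qemu-100-XXXX.vma.zst 100 --storage local-lvm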


 
