I have the same issue. I ran `apt-get update; apt-get upgrade` on 5.1 and now all VMs and CTs can no longer find their virtual drives.
lvscan
ACTIVE            '/dev/pve/swap' [7.00 GiB] inherit
ACTIVE            '/dev/pve/root' [37.00 GiB] inherit
inactive...
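For anyone else hitting this, the first thing worth trying is simply reactivating the volume group. A minimal sketch, assuming the VG is named pve as in the lvscan output above:
vgchange -ay pve    # attempt to activate every LV in the pve VG
lvscan              # re-check which LVs now show as ACTIVE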
My theory is that the hard drives' UUIDs all changed after a BIOS reset, maybe even their order in the BIOS. So I think even though Proxmox has found all of the drives and partitions, it may not know where things belong. Just my guess.
lsblk -f
NAME FSTYPE LABEL UUID...
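To check the UUID theory, one can compare what the kernel sees now against what the system expects. A quick sketch (all standard tools, nothing here is specific to my setup):
blkid                     # list filesystem UUIDs for all block devices
grep -i uuid /etc/fstab   # UUIDs the system was configured to mount
pvs -o +pv_uuid           # UUIDs LVM recorded for its physical volumes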
I was able to make each LV inactive with (a one-command alternative is sketched after this list):
lvchange -a n pve/data
lvchange -a n pve/vm-100-disk-1
lvchange -a n pve/vm-101-disk-1
lvchange -a n pve/vm-102-disk-1
lvchange -a n pve/vm-102-disk-2
lvchange -a n pve/vm-103-disk-1
lvchange -a n pve/vm-103-disk-2
lvchange -a n pve/vm-104-disk-1...
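Rather than listing every LV by hand, the same thing can be done in one command. A sketch, assuming everything lives in the pve VG:
lvchange -an pve    # deactivate every LV in the pve VG in one go
                    # (it will refuse to touch root/swap while they're mounted, which is fine)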
After a BIOS reset I'm not able to use any of the virtual machine drives from the thin pool. I've tried following the repair instructions:
lvconvert --repair pve/data
But I get the error:
Only inactive pool can be repaired.
I need to figure out how to make all of them inactive to do the...
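For reference, the sequence that error message points at looks roughly like this. A sketch, assuming pve/data is the thin pool and the VG has enough free space for the repair to allocate new metadata:
lvchange -an pve/vm-100-disk-1   # every thin LV in the pool must be inactive first...
lvchange -an pve/data            # ...and then the pool itself
lvconvert --repair pve/data      # rebuilds the pool metadata from a scan
lvchange -ay pve/data            # reactivate the pool afterwards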
lvconvert --repair pve/data
doesn't work:
WARNING: Not using lvmetad because config setting use_lvmetad=0.
WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).
Using default stripesize 64.00 KiB.
Only inactive pool can be repaired.
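The first warning can be addressed directly, and the "Only inactive pool can be repaired" line means some LV in the pool is still active. A sketch of how I'd chase that down:
pvscan --cache                    # rescan devices, as the warning suggests
lvs -a -o lv_name,lv_attr pve     # an 'a' in the 5th attr character means that LV is still active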