Hi folks,
I'm a little stuck with this one. I have 4 guest VMs running on PVE 6.3-3.
VM100 - Win10
VM101 - Ubuntu 20.04
VM102 - Ubuntu 20.04
VM103 - Ubuntu 20.04
Last night VM102 crashed, so I initiated a shutdown. After a while it still wouldn't respond, so I force-stopped it from PVE.
Upon reboot I was able to boot the VM, but its filesystem had gone read-only. I did some basic troubleshooting from the host (attempted to run fsck) but couldn't get it working. Eventually I restarted the host.
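For reference, the fsck attempt from the host looked roughly like this (a sketch; the exact /dev/mapper partition name kpartx prints depends on the guest's disk layout, so p1 here is a guess):

```shell
# Check VM102's filesystem from the PVE host while the guest is stopped.
# LV and partition names below are for my setup and may differ.
qm stop 102                                   # make sure the guest is off
kpartx -av /dev/pve/vm-102-disk-0             # map the partitions inside the thin LV
fsck -f /dev/mapper/pve-vm--102--disk--0p1    # check the mapped root partition
kpartx -dv /dev/pve/vm-102-disk-0             # remove the mappings when done
```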
After the restart, VM100 (Win10) still boots fine, but all of the Ubuntu VMs come up with a read-only filesystem. All VMs share the same local-lvm storage.
I've tried creating a new VM from scratch, which fails to finish booting. Restoring a VM from backup will boot, but it ends up read-only just like the original VMs.
Things I've checked:
- Metadata usage shows 2%
- lvdisplay shows all volumes available
- pvscan showed 15.99GB free
- vgcfgrestore pve --test failed
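These are roughly the commands behind the checks above, run from the PVE host (a sketch; output column selection is mine, the stock commands work too):

```shell
# Gather the LVM state for the checks listed above.
lvs -a -o lv_name,data_percent,metadata_percent   # thin pool data/metadata usage (~2% meta)
lvdisplay                                         # all volumes show "available"
pvscan                                            # reported 15.99GB free in the VG
vgcfgrestore pve --test                           # dry-run restore of VG metadata (failed)
```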
Code:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.7G     0  7.7G   0% /dev
tmpfs                 1.6G   37M  1.6G   3% /run
/dev/mapper/pve-root  114G   15G   94G  14% /
tmpfs                 7.7G   43M  7.7G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.7G     0  7.7G   0% /sys/fs/cgroup
/dev/nvme0n1p2        511M  312K  511M   1% /boot/efi
/dev/sdd1             220G  194G   15G  94% /media/backupdisk
/dev/fuse              30M   20K   30M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0
Code:
root@pve:~# lvs -a
  LV              VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz-- <318.51g             35.71  2.13
  [data_tdata]    pve Twi-ao---- <318.51g
  [data_tmeta]    pve ewi-ao----    3.25g
  [lvol0_pmspare] pve ewi-------    3.25g
  root            pve -wi-ao----  116.25g
  swap            pve -wi-ao----    8.00g
  vm-100-disk-0   pve Vwi-a-tz--   32.00g data        58.01
  vm-101-disk-0   pve Vwi-a-tz--   64.00g data        62.21
  vm-102-disk-0   pve Vwi-a-tz--   64.00g data        72.00
  vm-103-disk-0   pve Vwi-a-tz--   32.00g data        28.99
I'm at a loss as to what to do here, short of starting from scratch. Any advice out there?