My Proxmox server was running fine off a five-disk hardware RAID5 (LSI/Avago MegaRAID). One drive was throwing SMART errors, so I shut the VMs and servers down and replaced the drive. After the drive rebuilt and the server booted back up, PVE will no longer bring the LVM storage back online.
The storage in question is /dev/LocalVMStorage (an LVM thin pool, not a plain partition). 'lsblk' shows:
NAME                                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                       8:0    0  36.4T  0 disk
├─LocalVMStorage-LocalVMStorage_tmeta   252:4    0  15.9G  0 lvm
└─LocalVMStorage-LocalVMStorage_tdata   252:5    0  36.3T  0 lvm
sr0                                      11:0    1  1024M  0 rom
nvme0n1                                 259:0    0 465.8G  0 disk
├─nvme0n1p1                             259:1    0  1007K  0 part
├─nvme0n1p2                             259:2    0     1G  0 part
└─nvme0n1p3                             259:3    0 464.8G  0 part
  ├─pve-swap                            252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                            252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                      252:2    0   3.4G  0 lvm
  │ └─pve-data                          252:6    0 337.9G  0 lvm
  └─pve-data_tdata                      252:3    0 337.9G  0 lvm
    └─pve-data                          252:6    0 337.9G  0 lvm
'lvscan' shows the thin pool and every volume in it as inactive:
ACTIVE '/dev/pve/data' [337.86 GiB] inherit
ACTIVE '/dev/pve/swap' [8.00 GiB] inherit
ACTIVE '/dev/pve/root' [96.00 GiB] inherit
inactive '/dev/LocalVMStorage/LocalVMStorage' [<36.35 TiB] inherit
inactive '/dev/LocalVMStorage/vm-100-disk-0' [4.00 MiB] inherit
inactive '/dev/LocalVMStorage/vm-100-disk-1' [400.00 GiB] inherit
inactive '/dev/LocalVMStorage/vm-100-disk-2' [30.00 TiB] inherit
inactive '/dev/LocalVMStorage/vm-101-disk-0' [4.00 MiB] inherit
inactive '/dev/LocalVMStorage/vm-101-disk-1' [250.00 GiB] inherit
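In case it helps, activating the VG by hand usually produces a more specific error than lvscan does; this is roughly what I would try to capture (untested against this box, run as root):

```shell
# Try to activate everything in the volume group and note the exact error:
vgchange -ay LocalVMStorage

# Or just the thin pool, with verbose output for more detail:
lvchange -ay -v LocalVMStorage/LocalVMStorage
```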
'vgscan' shows the volume group
Found volume group "pve" using metadata type lvm2
Found volume group "LocalVMStorage" using metadata type lvm2
'pvscan' shows the physical volumes. (/dev/sda is the logical drive presented by the RAID controller.)
PV /dev/nvme0n1p3 VG pve lvm2 [<464.76 GiB / 16.00 GiB free]
PV /dev/sda VG LocalVMStorage lvm2 [<36.38 TiB / 376.00 MiB free]
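Since /dev/sda was just rebuilt by the RAID controller, a read-only sanity check of the LVM metadata on it might also be worth doing before attempting any repair (these commands only read; anything with a repair flag is a separate, riskier step):

```shell
# Check VG and PV metadata consistency (read-only):
vgck LocalVMStorage
pvck /dev/sda

# List all LVs, including the hidden _tmeta/_tdata sub-LVs, with devices:
lvs -a -o +devices LocalVMStorage
```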
Attempting a 'lvconvert -v --repair LocalVMStorage/LocalVMStorage' comes back with the error:
root@temp:~# lvconvert -v --repair LocalVMStorage/LocalVMStorage
activation/volume_list configuration setting not defined: Checking only host tags for LocalVMStorage/lvol0_pmspare.
Creating LocalVMStorage-lvol0_pmspare
Loading table for LocalVMStorage-lvol0_pmspare (252:4).
Resuming LocalVMStorage-lvol0_pmspare (252:4).
activation/volume_list configuration setting not defined: Checking only host tags for LocalVMStorage/LocalVMStorage_tmeta.
Creating LocalVMStorage-LocalVMStorage_tmeta
Loading table for LocalVMStorage-LocalVMStorage_tmeta (252:5).
Resuming LocalVMStorage-LocalVMStorage_tmeta (252:5).
Executing: /usr/sbin/thin_repair -i /dev/LocalVMStorage/LocalVMStorage_tmeta -o /dev/LocalVMStorage/lvol0_pmspare
no compatible roots found
/usr/sbin/thin_repair failed: 64
Repair of thin metadata volume of thin pool LocalVMStorage/LocalVMStorage failed (status:64). Manual repair required!
Removing LocalVMStorage-LocalVMStorage_tmeta (252:5)
Removing LocalVMStorage-lvol0_pmspare (252:4)
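From what I understand, when 'lvconvert --repair' fails like this, the fallback described in the lvmthin(7) man page is to swap the pool's metadata out into a visible LV and run the thin tools on it manually. A rough, untested sketch of that procedure (the LV name meta_swap and the output path are my own placeholders; note pvscan shows only 376 MiB free in this VG, so the VG would likely need extending with a spare disk first, and ideally /dev/sda should be imaged before any of this):

```shell
# Deactivate the whole VG before touching the metadata.
vgchange -an LocalVMStorage

# Create (without activating) a spare LV the size of the pool's
# metadata LV (15.9G here). Requires free space in the VG; with only
# 376 MiB free this likely means vgextend'ing with another disk first.
lvcreate -an -L 16G -n meta_swap LocalVMStorage

# Swap it with the pool's metadata LV; the damaged metadata then sits
# in LocalVMStorage/meta_swap, where it can be activated and examined.
lvconvert --thinpool LocalVMStorage/LocalVMStorage --poolmetadata LocalVMStorage/meta_swap

# Inspect the damaged metadata with the thin tools (read-only first):
lvchange -ay LocalVMStorage/meta_swap
thin_check /dev/LocalVMStorage/meta_swap
thin_dump --repair /dev/LocalVMStorage/meta_swap -o /root/tmeta.xml
```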
I really don't know where to go from here. I can't get 'fsck' or 'e2fsck' to run against /dev/sda or /dev/sda/LocalVMStorage. Any help anyone can give me is much appreciated.
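(For what it's worth, I suspect fsck fails because neither target holds a filesystem directly: /dev/sda is the LVM PV, and LocalVMStorage/LocalVMStorage is the thin pool itself; the filesystems live inside the vm-*-disk-* thin volumes. If the pool ever activates again, I imagine a check would look something like this — guessing ext4 and a bare filesystem on the image; a real VM disk often has a partition table and would need kpartx or losetup first:)

```shell
# Activate one thin volume, then check its filesystem read-only:
lvchange -ay LocalVMStorage/vm-100-disk-1
fsck.ext4 -n /dev/LocalVMStorage/vm-100-disk-1   # -n: no changes, report only
```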