Need help fast: LVM problems after power loss

clano

New Member
May 13, 2022
We are currently struggling to get the VM datastore up and running after a power loss last night.
The LVM setup for the OS itself is fine: we can log in via SSH, and the Proxmox web UI loads and lets us log in.

However, whenever we try to start a VM, the following error is shown:

Code:

TASK ERROR: activating LV 'ssd_raid/ssd_raid' failed: Check of pool ssd_raid/ssd_raid failed (status:1). Manual repair required!
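
From what I understand, LVM runs thin_check (from the thin-provisioning-tools package on Debian/Proxmox) against the pool metadata before it will activate a thin pool, and a non-zero exit status keeps the pool down. That is apparently not unusual after a power loss if the metadata was being written at that moment. Something like this should at least show the tool versions involved and whether more detailed check output landed in the journal (where exactly it ends up depends on how activation was triggered, so that part is a guess):

Code:

dpkg -l lvm2 thin-provisioning-tools
journalctl -b | grep -iE 'thin|ssd_raid' | less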

lvs -a shows:

Code:

  LV               VG       Attr         LSize Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data             pve      twi-a-tz-- 429.11g                 0.00   0.40
  [data_tdata]     pve      Twi-ao---- 429.11g
  [data_tmeta]     pve      ewi-ao----  <4.38g
  [lvol0_pmspare]  pve      ewi-------  <4.38g
  root             pve      -wi-ao----  96.00g
  swap             pve      -wi-ao----   8.00g
  [lvol0_pmspare]  ssd_raid ewi-------  15.81g
  ssd_raid         ssd_raid twi---tz--  <3.61t
  [ssd_raid_tdata] ssd_raid Twi-------  <3.61t
  [ssd_raid_tmeta] ssd_raid ewi-------  15.81g
  temphyperv       ssd_raid Vri---tz-- 500.00g ssd_raid
  vm-100-disk-1    ssd_raid Vri---tz--  35.00g ssd_raid
  vm-101-disk-0    ssd_raid Vri---tz-- 105.00g ssd_raid
  vm-101-disk-1    ssd_raid Vri---tz-- 100.00g ssd_raid
  vm-102-disk-0    ssd_raid Vri---tz--  32.00g ssd_raid
  vm-103-disk-0    ssd_raid Vri---tz-- 240.00g ssd_raid
  vm-104-disk-0    ssd_raid Vri---tz-- 127.00g ssd_raid
  vm-104-disk-1    ssd_raid Vri---tz--  64.00g ssd_raid
  vm-105-disk-0    ssd_raid Vri---tz-- 127.00g ssd_raid
  vm-106-disk-0    ssd_raid Vri---tz-- 500.00g ssd_raid
  vm-107-disk-0    ssd_raid Vri---tz-- 500.00g ssd_raid
  vm-108-disk-0    ssd_raid Vri---tz--  60.00g ssd_raid
  vm-109-disk-0    ssd_raid Vri---tz-- 120.00g ssd_raid
  vm-110-disk-0    ssd_raid Vri---tz-- 350.00g ssd_raid
  vm-111-disk-0    ssd_raid Vri---tz--  60.00g ssd_raid
  vm-112-disk-0    ssd_raid Vri---tz-- 120.00g ssd_raid
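
Nothing in the ssd_raid VG is active in that listing (the fifth character of the Attr field is "-" everywhere), which matches the failed activation. Before attempting any repair it is probably worth confirming which physical disk(s) back the pool and that they survived the power loss without I/O errors; a rough sketch, where sdX is just a placeholder for whatever pvs reports:

Code:

lvs -a -o +devices ssd_raid
pvs -o pv_name,vg_name,pv_size,pv_free
dmesg -T | grep -iE 'i/o error|ata|nvme'
smartctl -a /dev/sdX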

lvconvert --repair ssd_raid/ssd_raid gives:

Code:

Child 38525 exited abnormally
Repair of thin metadata volume of thin pool ssd_raid/ssd_raid failed (status:-1). Manual repair required!
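
If I read lvmthin(7) correctly, the "Child ... exited abnormally" part means the thin_repair helper that lvconvert spawns is crashing rather than failing cleanly, so running the thin tools by hand would at least show their real error output. Below is my rough, untested understanding of the manual route: make the hidden metadata sub-LV readable, keep a raw copy of it, let thin_repair write a repaired image into a fresh LV in the same VG, and swap that in as the pool metadata. The meta_repair name, the 16G size and /dev/sdY are all placeholders, component activation of the _tmeta sub-LV needs a reasonably recent LVM, and I would not run any of this without a full backup. Corrections very welcome:

Code:

# safety copy of the VG descriptor (this is not the thin metadata itself)
vgcfgbackup ssd_raid

# make the damaged metadata readable; component activation is read-only,
# which is enough for checking it and taking a raw copy
lvchange -ay ssd_raid/ssd_raid_tmeta
dd if=/dev/mapper/ssd_raid-ssd_raid_tmeta of=/root/ssd_raid_tmeta.bak bs=1M
thin_check /dev/mapper/ssd_raid-ssd_raid_tmeta

# the repaired copy must live in the same VG so it can be swapped in,
# and ssd_raid only has 512 MiB free, so temporarily extend the VG with
# a spare disk (or free up space some other way)
vgextend ssd_raid /dev/sdY
lvcreate -L 16G -n meta_repair ssd_raid   # at least as big as the 15.81g _tmeta
thin_repair -i /dev/mapper/ssd_raid-ssd_raid_tmeta -o /dev/ssd_raid/meta_repair
thin_check /dev/ssd_raid/meta_repair

# swap the repaired LV in as the pool metadata (per lvmthin(7)) and retry
lvchange -an ssd_raid/ssd_raid_tmeta
lvchange -an ssd_raid/meta_repair
lvconvert --thinpool ssd_raid/ssd_raid --poolmetadata ssd_raid/meta_repair
vgchange -ay ssd_raid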

vgchange -ay gives:

Code:

Check of pool ssd_raid/ssd_raid failed (status:1). Manual repair required!
0 logical volume(s) in volume group "ssd_raid" now active
3 logical volume(s) in volume group "pve" now active

lvdisplay lists all volumes in ssd_raid with the same status (NOT available); a typical entry:

Code:

--- Logical volume ---
LV Path /dev/ssd_raid/vm-101-disk-1
LV Name vm-101-disk-1
VG Name ssd_raid
LV UUID synMKT-aRmy-IWKF-Shr5-vwzj-47Fx-Dqvl19
LV Write Access read only
LV Creation host, time pve01, 2021-04-13 10:55:59 +0200
LV Pool name ssd_raid
LV Status NOT available
LV Size 100.00 GiB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors auto

vgdisplay shows:

Code:

--- Volume group ---
VG Name ssd_raid
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 97
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 17
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <3.64 TiB
PE Size 4.00 MiB
Total PE 953727
Alloc PE / Size 953599 / <3.64 TiB
Free PE / Size 128 / 512.00 MiB
VG UUID xXuRvB-TWv8-90OQ-OP8U-xXlF-ZvWb-PPemzI

--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 557.87 GiB
PE Size 4.00 MiB
Total PE 142815
Alloc PE / Size 138719 / 541.87 GiB
Free PE / Size 4096 / 16.00 GiB
VG UUID g19KIj-A1C7-52LW-RLJy-aO7l-tqiR-0hQLmI
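
If I understand it right, this at least shows that the LVM metadata of the ssd_raid VG itself is readable and consistent (sequence number 97, all but 128 extents allocated), so the damage is inside the thin pool's own metadata (the _tmeta sub-LV) rather than in the VG descriptor. vgcfgrestore therefore cannot fix this, but it seems sensible to confirm the usual descriptor archives are in place before trying any repair:

Code:

vgcfgrestore --list ssd_raid
ls -l /etc/lvm/archive/ /etc/lvm/backup/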
 
Maybe this can help you? I have no idea how to do a manual LVM repair, sorry. It might be easier to restore the VMs from backups. I assume you have backups of all important VMs?
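
If it does come to restoring and the backups were made with vzdump, restoring a guest onto a still-working storage is one command per VM; the archive path, VMID and storage name here are only placeholders (add --force if the old VM config for that VMID still exists):

Code:

qmrestore /mnt/backup/vzdump-qemu-101-2022_05_12-23_00_00.vma.zst 101 --storage local-lvm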
 
