Can't activate an inactive LV

Jorge Molas

Hi! Yesterday my Proxmox server collapsed; I could not manage the virtual machines, so I had to restart.
After the restart, the volume group and the LV disks were inactive, so we tried:

# lvchange -ay pve/data

It reported:
"Check of pool pve/data failed (status:1). Manual repair required!"

Then we made the repair with:

# lvconvert --repair pve/data

The disks were recovered, but one is missing:

# lvscan
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [96.00 GiB] inherit
  ACTIVE            '/dev/pve/data' [811.21 GiB] inherit
  ACTIVE            '/dev/pve/vm-101-disk-1' [60.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-102-disk-1' [50.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-103-disk-1' [50.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-103-disk-2' [60.00 GiB] inherit
  inactive          '/dev/pve/vm-103-disk-4' [566.00 GiB] inherit
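
As a side note, only as a hedged sketch: lvconvert --repair normally keeps the old thin pool metadata in a backup LV (typically named data_meta0 in this setup; that exact name is an assumption), which should show up when listing hidden LVs too:

# lvs -a pve

If a pve/data_meta0 LV exists, it holds the pre-repair metadata and may be useful for later inspection.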

# lvdisplay /dev/pve/vm-103-disk-4
  --- Logical volume ---
  LV Path                /dev/pve/vm-103-disk-4
  LV Name                vm-103-disk-4
  VG Name                pve
  LV UUID                QoZbv3-R8Mj-paT3-Pf9M-23QX-2Ail-yzT5iv
  LV Write Access        read/write
  LV Creation host, time node1, 2018-06-08 10:55:39 -0300
  LV Pool name           data
  LV Status              NOT available
  LV Size                566.00 GiB
  Current LE             144896
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

When I try to activate the LV separately, it reports:

# lvchange -ay /dev/pve/vm-103-disk-4
device-mapper: reload ioctl on (253:10) failed: No data available
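
One hedged check that may narrow this down (thin_id is a standard lvs reporting field; reading a missing ID as a missing entry in the repaired pool metadata is my assumption): if the repaired metadata no longer contains the device ID for vm-103-disk-4, device-mapper cannot build its table, which would match the "No data available" error above.

# lvs -o+thin_id pve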


Regards!
 
Hi,

I guess your metadata pool is 100% used, so your thin LVM is no longer working?

Check the Meta% usage of the LVs with "lvs".
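
For example, using standard lvs reporting fields (nothing Proxmox-specific):

# lvs -o lv_name,data_percent,metadata_percent pve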
 
Hi @wolfgang, thanks for answering. I checked the pve/data pool metadata usage but did not see anything strange.
Here is the output of lvs -v:

  LV            VG  Attr       LSize   Pool Origin Data%  Meta% Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 811.21g             19.58  1.91
  root          pve -wi-ao----  96.00g
  swap          pve -wi-ao----   8.00g
  vm-101-disk-1 pve Vwi-aotz--  60.00g data        100.00
  vm-102-disk-1 pve Vwi-aotz--  50.00g data        55.34
  vm-103-disk-1 pve Vwi-aotz--  50.00g data        32.29
  vm-103-disk-2 pve Vwi-aotz--  60.00g data        91.64
  vm-103-disk-4 pve Vwi---tz-- 566.00g data

Regards

Adding some information about block device 253:10, this is the output of dmesg:
device-mapper: table: 253:10: thin: Couldn't open thin internal device
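
As a hedged next step, assuming the repair left a pve/data_meta0 backup of the old metadata (the LV name and its presence are assumptions), the old pool metadata could be inspected with thin_dump from thin-provisioning-tools to see whether an entry for this thin LV still exists:

# lvchange -ay pve/data_meta0
# thin_dump /dev/pve/data_meta0 | grep dev_id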

 
No, luckily you do not have metadata overfill.
Then your metadata pool simply got corrupted and not all data could be restored.