I can't repair my LVM-thin volume after a power failure

Donovan Hoare

Hi All.

I have lost access to my LVM-thin pool and I can't get it repaired.
It happened after a power failure.

I have an HP PERC RAID controller connected to a Dell PowerVault MD100.
The LVM for the OS is working, and my internal 40TB LVM-thin pool is working; it's only the pool on the external RAID controller that won't come up.
The relevant outputs are below.
Any help would be appreciated.

lvchange -a y md1-30tb/md1-30tb
Code:
lvchange -a y md1-30tb/md1-30tb
  Check of pool md1-30tb/md1-30tb failed (status:1). Manual repair required!
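
As far as I understand, the check is done by thin_check from thin-provisioning-tools, and lvchange hides its output. Would it be safe to activate the metadata LV on its own and run thin_check by hand to see the actual error? I was thinking of something like this (the device-mapper path is taken from the repair log further down):
Code:
# activate only the hidden metadata LV (component activation), then check it directly
lvchange -ay md1-30tb/md1-30tb_tmeta
thin_check /dev/mapper/md1--30tb-md1--30tb_tmeta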

lvs -a -o +devices
Code:
lvs -a -o +devices
  LV                    VG            Attr       LSize    Pool          Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices               
  internal-40tb         internal-40tb twi-aotz--   36.35t                      1.42   0.57                             internal-40tb_tdata(0)
  [internal-40tb_tdata] internal-40tb Twi-ao----   36.35t                                                              /dev/sdb(4048)       
  [internal-40tb_tmeta] internal-40tb ewi-ao----   15.81g                                                              /dev/sdb(9533103)     
  [lvol0_pmspare]       internal-40tb ewi-------   15.81g                                                              /dev/sdb(0)           
  vm-109-disk-0         internal-40tb Vwi-aotz--  100.00g internal-40tb        53.62                                                         
  vm-117-disk-0         internal-40tb Vwi-aotz--  500.00g internal-40tb        29.15                                                         
  vm-118-disk-0         internal-40tb Vwi-aotz--   20.00g internal-40tb        0.97                                                         
  vm-123-disk-0         internal-40tb Vwi-aotz--   20.00g internal-40tb        0.72                                                         
  vm-124-disk-0         internal-40tb Vwi-aotz--   20.00g internal-40tb        0.97                                                         
  vm-127-disk-0         internal-40tb Vwi-aotz--   32.00g internal-40tb        20.08                                                         
  vm-127-disk-1         internal-40tb Vwi-aotz--  500.00g internal-40tb        1.00                                                         
  vm-127-disk-2         internal-40tb Vwi-aotz-- 1000.00g internal-40tb        0.01                                                         
  vm-127-disk-3         internal-40tb Vwi-aotz-- 1000.00g internal-40tb        0.01                                                         
  vm-128-disk-0         internal-40tb Vwi-aotz--   40.00g internal-40tb        50.71                                                         
  vm-133-disk-0         internal-40tb Vwi-aotz--    5.37t internal-40tb        3.49                                                         
  vm-139-disk-0         internal-40tb Vwi-aotz--  160.00g internal-40tb        44.75                                                         
  vm-141-disk-0         internal-40tb Vwi-aotz--  350.00g internal-40tb        9.33                                                         
  [lvol0_pmspare]       md1-30tb      ewi-------   15.81g                                                              /dev/sdc(0)           
  md1-30tb              md1-30tb      twi---tz--   27.25t                                                              md1-30tb_tdata(0)     
  [md1-30tb_tdata]      md1-30tb      Twi-------   27.25t                                                              /dev/sdc(4048)       
  [md1-30tb_tmeta]      md1-30tb      ewi-------   15.81g                                                              /dev/sdc(7148463)     
  vm-116-disk-0         md1-30tb      Vwi---tz--   <9.77t md1-30tb                                                                           
  vm-118-disk-0         md1-30tb      Vwi---tz--   20.00g md1-30tb                                                                           
  vm-119-disk-0         md1-30tb      Vwi---tz--  100.00g md1-30tb                                                                           
  vm-121-disk-0         md1-30tb      Vwi---tz--  350.00g md1-30tb                                                                           
  vm-124-disk-0         md1-30tb      Vwi---tz--   20.00g md1-30tb                                                                           
  vm-125-disk-0         md1-30tb      Vwi---tz--   40.00g md1-30tb                                                                           
  vm-128-disk-0         md1-30tb      Vwi---tz--   32.00g md1-30tb                                                                           
  vm-128-disk-1         md1-30tb      Vwi---tz--  960.00g md1-30tb                                                                           
  vm-132-disk-0         md1-30tb      Vwi---tz--    4.88t md1-30tb                                                                           
  data                  pve           twi-a-tz--  429.11g                      0.00   0.40                             data_tdata(0)         
  [data_tdata]          pve           Twi-ao----  429.11g                                                              /dev/sda3(26624)     
  [data_tmeta]          pve           ewi-ao----   <4.38g                                                              /dev/sda3(136477)     
  [lvol0_pmspare]       pve           ewi-------   <4.38g                                                              /dev/sda3(137598)     
  root                  pve           -wi-ao----   96.00g                                                              /dev/sda3(2048)       
  swap                  pve           -wi-ao----    8.00g                                                              /dev/sda3(0)
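
One thing I notice: the broken pool shows attributes twi---tz-- (inactive), while the healthy internal-40tb pool shows twi-aotz--, so lvs can't report Data%/Meta% for it. In case metadata fullness is relevant, this is the command I would use to check it once the pool activates (it reports nothing while the pool is down):
Code:
# show activation state and data/metadata usage for the broken pool
lvs -o lv_name,lv_attr,data_percent,metadata_percent md1-30tb/md1-30tb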


lvconvert -v --repair md1-30tb/md1-30tb
As you can see, it just fails and doesn't give me any useful output to figure out why.
Code:
lvconvert -v --repair md1-30tb/md1-30tb
  activation/volume_list configuration setting not defined: Checking only host tags for md1-30tb/lvol0_pmspare.
  Creating md1--30tb-lvol0_pmspare
  Loading table for md1--30tb-lvol0_pmspare (253:17).
  Resuming md1--30tb-lvol0_pmspare (253:17).
  activation/volume_list configuration setting not defined: Checking only host tags for md1-30tb/md1-30tb_tmeta.
  Creating md1--30tb-md1--30tb_tmeta
  Loading table for md1--30tb-md1--30tb_tmeta (253:18).
  Resuming md1--30tb-md1--30tb_tmeta (253:18).
  Executing: /usr/sbin/thin_repair  -i /dev/mapper/md1--30tb-md1--30tb_tmeta -o /dev/mapper/md1--30tb-lvol0_pmspare
  Child 97902 exited abnormally
  Repair of thin metadata volume of thin pool md1-30tb/md1-30tb failed (status:-1). Manual repair required!
  Removing md1--30tb-md1--30tb_tmeta (253:18)
  Removing md1--30tb-lvol0_pmspare (253:17)
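
If it would help diagnose this, I could activate the tmeta as above and run thin_repair or thin_dump by hand to capture the error that lvconvert seems to swallow. Is something like this safe to try? (Paths copied from the log above; the XML output path is just an example.)
Code:
# run the repair manually so stderr is visible
thin_repair -i /dev/mapper/md1--30tb-md1--30tb_tmeta -o /dev/mapper/md1--30tb-lvol0_pmspare
# or dump whatever metadata is still readable to XML for inspection
thin_dump --repair /dev/mapper/md1--30tb-md1--30tb_tmeta > /root/md1-30tb-tmeta.xml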
 
