LVM and thin recovery problem

Vince-0

Active Member
Dec 15, 2017
I had a Dell PERC RAID controller die, and it looks like the LVM volumes got corrupted.
Proxmox booted after the "activation of LVM2 logical volumes" job ran for a long time, with these errors:

Code:
Failed to start activation of LVM2 logical volumes
systemctl status lvm2-activation-early.service
device-mapper: thin 253:5: metadata device (4145152 blocks) too small: expected 4161600
device-mapper: table 253:5: thin-pool: preresume failed, error = -22
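That error means the pool's superblock expects a metadata area of 4161600 blocks, but the metadata device handed to it only has 4145152, so device-mapper refuses to resume the pool (error -22 is EINVAL). As a sketch (device names taken from the lvs/lsblk output below), the metadata LV's actual size can be compared from userspace:

```shell
# Size of the metadata LV as LVM sees it
lvs -a -o lv_name,lv_size pve | grep tmeta
# Same device in 512-byte sectors, as device-mapper sees it
blockdev --getsz /dev/mapper/pve-data_tmeta
```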

So Proxmox booted without pve/data (and therefore without cluster services etc.), but recovery of pve/data and the VM/CT disks on it isn't working.
Two LVs are a problem: a plain LVM volume, "RAID10/vm-188-disk-0", and the LVM-thin pool "pve/data" holding the VM/CT disks.

I booted a live Ubuntu Server OS to try to get at the data and copy it off. Booting took some time, with the error:
Timed out waiting for device dev-disk-by\x2duuid-...

I have a separate disk with a VG "RAID10" containing the LV "vm-188-disk-0", which has backups on it, that I'm trying to mount. It fails with:

Code:
root@ubuntu-server:/mnt# mount /dev/RAID10/vm-188-disk-0 /mnt/LVM/
mount: /mnt/LVM: wrong fs type, bad option, bad superblock on /dev/mapper/RAID10-vm--188--disk--0, missing codepage or helper program, or other error.
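A mount error like this is expected here: the LV holds a whole-disk image (MBR plus partitions) rather than a bare filesystem, so mount finds no superblock at offset 0. A quick way to confirm without mounting (a sketch with standard tools):

```shell
# Identify what actually sits at the start of the LV
file -s /dev/RAID10/vm-188-disk-0    # should report a DOS/MBR boot sector
blkid -p /dev/RAID10/vm-188-disk-0   # low-level probe of the device contents
```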

Code:
root@ubuntu-server:/mnt# fdisk -l /dev/RAID10/vm-188-disk-0
Disk /dev/RAID10/vm-188-disk-0: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 1048576 bytes
Disklabel type: dos
Disk identifier: 0x0000615a

Device                      Boot   Start        End    Sectors  Size Id Type
/dev/RAID10/vm-188-disk-0p1 *       2048    2099199    2097152    1G 83 Linux
/dev/RAID10/vm-188-disk-0p2      2099200 2147483647 2145384448 1023G 8e Linux LVM

I tried kpartx and mount:
Code:
root@ubuntu-server:/dev/mapper# kpartx -av /dev/RAID10/vm-188-disk-0  
add map RAID10-vm--188--disk--0p1 (253:6): 0 2097152 linear 253:0 2048 
add map RAID10-vm--188--disk--0p2 (253:7): 0 2145384448 linear 253:0 2099200

p1 is the boot partition;
p2 is an LVM2_member, which can't be mounted directly:
Code:
root@ubuntu-server:/mnt# mount /dev/mapper/RAID10-vm--188--disk--0p2 /mnt/LVM/ 
mount: /mnt/LVM: unknown filesystem type 'LVM2_member'.
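That is also expected: p2 is a physical volume of a nested VG (the guest's own LVM), so the route is to activate that VG and mount its LVs rather than the partition itself. A sketch, using the "centos" VG and LV names that show up in the pvs/lvs/lsblk output below:

```shell
vgscan                            # rescan now that the partition mapping exists
vgchange -ay centos               # activate the guest's nested VG
mkdir -p /mnt/LVM
mount /dev/centos/root /mnt/LVM   # likewise centos/home if needed
```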

I tried lvconvert --repair pve/data, but that looks like a deep rabbit hole, so for now my priority is getting to the data in /dev/RAID10/vm-188-disk-0.
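For reference, the manual repair path that lvconvert --repair automates is roughly the following (only a sketch, assuming enough free space in the VG for two scratch metadata LVs; lvmthin(7) documents the swap-based procedure):

```shell
lvchange -an pve/data                     # pool must be inactive
# Swap a blank scratch LV in as the pool's metadata; the damaged
# metadata then becomes accessible under the scratch LV's name
lvcreate -an -Zn -L 16G -n meta_swap pve
lvconvert --thinpool pve/data --poolmetadata pve/meta_swap
lvchange -ay pve/meta_swap
# Repair the damaged metadata into a second scratch LV
lvcreate -L 16G -n meta_fixed pve
thin_repair -i /dev/pve/meta_swap -o /dev/pve/meta_fixed
# Swap the repaired metadata back into the pool
lvchange -an pve/meta_fixed
lvconvert --thinpool pve/data --poolmetadata pve/meta_fixed
```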

pvs:
Code:
root@ubuntu-server:/dev/pve# pvs 
  PV                                    VG     Fmt  Attr PSize     PFree  
  /dev/mapper/RAID10-vm--188--disk--0p2 centos lvm2 a--  <1023.00g  4.00m 
  /dev/sda3                             pve    lvm2 a--     <6.55t 16.00g 
  /dev/sdb                              RAID10 lvm2 a--     <6.55t <5.55t

lvs:
Code:
root@ubuntu-server:/dev/pve# lvs -a 
  LV             VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert 
  vm-188-disk-0  RAID10 -wi-ao----    1.00t                                                     
  home           centos -wi-a----- <969.12g                                                     
  root           centos -wi-a-----   50.00g                                                     
  swap           centos -wi-a-----   <3.88g                                                     
  data           pve    twi---tz--    6.34t                                                     
  data_meta0     pve    -wi-a-----   16.00g                                                     
  [data_tdata]   pve    Twi-a-----    6.34t                                                     
  [data_tmeta]   pve    ewi-a-----   16.00g                                                     
  root           pve    -wi-ao----   96.00g                                                     
  swap           pve    -wi-a-----   64.00g                                                     
  vm-104-disk-1  pve    Vwi---tz--   35.00g data                                                
  vm-130-disk-1  pve    Vwi---tz--   45.00g data                                                
  vm-132-disk-0  pve    Vwi---tz--  200.00g data                                                
  vm-141-disk-1  pve    Vwi---tz--   15.00g data                                                
  vm-144-disk-1  pve    Vwi---tz--   30.00g data                                                
  vm-146-disk-0  pve    Vwi---tz--   60.00g data                                                
  vm-147-disk-0  pve    Vwi---tz--    1.42t data                                                
  vm-158-disk-0  pve    Vwi---tz--   60.00g data                                                
  vm-161-disk-0  pve    Vwi---tz--   25.00g data                                                
  vm-171-disk-0  pve    Vwi---tz--   80.00g data                                                
  vm-172-disk-0  pve    Vwi---tz--  100.00g data                                                
  vm-173-disk-0  pve    Vwi---tz--  100.00g data                                                
  vm-175-disk-0  pve    Vwi---tz--   60.00g data                                                
  vm-179-disk-1  pve    Vwi---tz--   47.00g data                                                
  vm-182-disk-0  pve    Vwi---tz--   60.00g data                                                
  vm-190-disk-1  pve    Vwi---tz--  230.00g data                                                
  vm-210-disk-0  pve    Vwi---tz--   25.00g data                                                
  vm-243-disk-0  pve    Vwi---tz--  135.00g data                                                
  vm-249-disk-0  pve    Vwi---tz--   60.00g data                                                
  vm-252-disk-0  pve    Vwi---tz--   50.00g data                                                
  vm-3005-disk-0 pve    Vwi---tz--   70.00g data                                                
  vm-3031-disk-0 pve    Vwi---tz--   85.00g data                                                
  vm-3033-disk-0 pve    Vwi---tz--  100.00g data                                                
  vm-3034-disk-0 pve    Vwi---tz--  100.00g data                                                
  vm-3037-disk-0 pve    Vwi---tz--   60.00g data                                                
  vm-3072-disk-0 pve    Vwi---tz--   60.00g data                                                
  vm-3082-disk-0 pve    Vwi---tz--   60.00g data                                                
  vm-4002-disk-0 pve    Vwi---tz--   25.00g data                                                
  vm-5555-disk-0 pve    Vwi---tz--   40.00g data
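Note that all the thin LVs above are inactive (no "a" in the Attr column). Once the pool itself resumes, a single guest disk can be activated and imaged off, roughly like this (the backup path is hypothetical; -K overrides the activation-skip flag if one is set):

```shell
lvchange -ay -K pve/vm-104-disk-1
# Image the guest disk off to safe storage before further experiments
dd if=/dev/pve/vm-104-disk-1 of=/mnt/backup/vm-104-disk-1.raw bs=4M status=progress
```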

lsblk:
Code:
root@ubuntu-server:/dev/pve# lsblk -f 
NAME                          FSTYPE      LABEL                           UUID                                   MOUNTPOINT 
loop0                         squashfs                                                                           /media/filesystem 
loop1                         squashfs                                                                            
loop2                         squashfs                                                                           /lib/modules 
loop3                         squashfs                                                                           /media/rack.lower 
loop4                         squashfs                                                                           /media/region.lower 
loop5                         squashfs                                                                           /snap/core/6350 
loop6                         squashfs                                                                           /snap/subiquity/664 
sda                                                                                                               
├─sda1                                                                                                            
├─sda2                        vfat                                        CF55-F838                               
└─sda3                        LVM2_member                                 McBQFe-zL5d-B3HQ-Q27f-UFfo-9Ejj-bDARvL  
  ├─pve-swap                  swap                                        987c23f6-bb11-492f-ae0a-a442c6420174    
  ├─pve-root                  ext4                                        f9f12ceb-7b08-408a-8dfe-312e029bb213   /mnt/PVEROOT 
  ├─pve-data_meta0                                                                                                
  ├─pve-data_tmeta                                                                                                
  └─pve-data_tdata                                                                                                
sdb                           LVM2_member                                 lmeD1C-FzsJ-Aekd-6gqy-niBs-0v5I-3ajulg  
└─RAID10-vm--188--disk--0                                                                                         
  ├─RAID10-vm--188--disk--0p1 xfs                                         d09f9e9e-0980-462e-bdd8-193c204e632c    
  └─RAID10-vm--188--disk--0p2 LVM2_member                                 gxdJly-8A11-CSdL-TGQg-xyBo-fxum-5BuQlH  
    ├─centos-swap             swap                                        5b45bb3c-7692-44f9-9e5a-4b0acb9862b4    
    ├─centos-home             xfs                                         1725838d-8aa2-48b0-943b-29623e260d60    
    └─centos-root             xfs                                         ff9ae317-5445-433b-8fcd-5b5f75b68dcd    
sr0                           iso9660     Ubuntu-Server 18.04.2 LTS amd64 2019-02-14-10-06-00-00                 /cdrom


Any advice on how to proceed to get that RAID10/vm-188 disk available would be greatly appreciated.
 
So it turns out the RAID10/vm-188 disk wasn't a problem after all.
Now I'm left with only the LVM-thin pool pve/data, but I'm unsure how to proceed with pvmove and the pve/data_meta0 LV left behind by lvconvert --repair.
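For anyone following along: data_meta0 is the damaged metadata that lvconvert --repair preserves as a fallback, and pvmove only moves extents between PVs, so it isn't part of the repair itself. Assuming the repaired pool activates cleanly and the guest disks check out, the leftover can eventually be dropped (a sketch):

```shell
# Keep data_meta0 until the repaired pool is verified, then reclaim it
lvchange -an pve/data_meta0
lvremove pve/data_meta0
```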

I've followed some posts about this type of recovery:
https://forum.proxmox.com/threads/unable-to-resume-pve-data.12497/
https://www.unixrealm.com/?p=12000
https://support.hpe.com/hpesc/public/docDisplay?docId=mmr_kc-0126722

I am still stuck.
If anyone out there has more information about recovering LVM-thin pools with broken metadata, I would greatly appreciate any pointers.
 
No progress on the thin-volume recovery, so we decided to do a block-level data recovery instead.
 
