GRUB Rescue: Recover Data

Jonpaulh

New Member
Nov 16, 2017
Yesterday my electricity was cut and my UPSs ran down, so eventually my Proxmox server lost power. Once power was restored, all the other servers came back online except one. When booting it I now land in the GRUB rescue shell.

Code:
error: unknown filesystem
Entering rescue mode...

I see the following with ls

Code:
grub rescue> ls
(hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1) (lvm/pve-root) (lvm/pve-swap)

I have tried an ls on each one and I get the same result regardless:
Code:
grub rescue> ls (lvm/pve-root)
(lvm/pve-root): Filesystem is unknown
grub rescue> ls (lvm/pve-root)/
error: unknown filesystem
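
For reference, when only GRUB's own modules are missing it is sometimes possible to boot manually from the rescue shell (a sketch; paths assume the standard Proxmox layout with /boot on the root LV, so adjust the prefix if /boot lives elsewhere):

Code:
grub rescue> set prefix=(lvm/pve-root)/boot/grub
grub rescue> insmod normal
grub rescue> normal

Here, though, even a plain ls on the LVs reports an unknown filesystem, so the problem sits below GRUB's module loading.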

I have booted into a live Ubuntu CD; checking fdisk -l:

Code:
# fdisk -l
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7C6C8CF5-63CC-497D-A1D9-A3108D2E36DA

Device       Start    End         Sectors     Size    Type
/dev/sda1    34       2047        2014        1007K   BIOS boot
/dev/sda2    2048     262143      260096      127M    EFI System
/dev/sda3    262144   1953525134  1953262991  931.4G  Linux LVM

Partition 1 does not start on a physical sector boundary

Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Code:
# parted /dev/sda p
Model: ATA TOSHIBA HDWJ110 (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number   Start    End      Size     File system   Name   Flags
 1       17.4kB   1049kB   1031kB                        bios_grub
 2       1049kB   134MB    133MB    fat32                boot, esp
 3       134MB    1000GB   1000GB                        lvm

There is some more info below, but these are the main parts:
Code:
# pvs;lvs
PV             VG         Fmt        Attr         PSize         PFree
/dev/sda3    pve     lvm2     a--         <931.39g    15.79g
LV             VG         Attr         LSize
data
root
swap
vm-103-disk-1
vm-104-disk-1
vm-113-disk-1
vm-134-disk-1

It appears the disks are there; what would be the best way to back up the VMs or repair the disk?
 
Hello there,

Have you actually tried mounting the volumes from the live ISO and physically checking whether the data is there?

Regards
 



I have tried a few more things and include the outputs here:

Code:
root@ubuntu-server:~# pvs
  WARNING: Not using lvmetad because a repair command was run.
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sda3  pve lvm2 a--  <931.39g 15.79g
root@ubuntu-server:~#
root@ubuntu-server:~# lvs
  WARNING: Not using lvmetad because a repair command was run.
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi---tz-- 811.39g
  root          pve -wi-a-----  96.00g
  swap          pve -wi-a-----   8.00g
  vm-103-disk-1 pve Vwi---tz-- 200.00g data
  vm-104-disk-1 pve Vwi---tz-- 120.00g data
  vm-113-disk-1 pve Vwi---tz-- 200.00g data
  vm-134-disk-1 pve Vwi---tz-- 100.00g data
root@ubuntu-server:~#
root@ubuntu-server:~# mkdir /media/test
root@ubuntu-server:~# mount /dev/v
vcs          vcs2         vcs4         vcs6         vcsa1        vcsa3        vcsa5        vfio/        vhci         vhost-vsock
vcs1         vcs3         vcs5         vcsa         vcsa2        vcsa4        vcsa6        vga_arbiter  vhost-net
root@ubuntu-server:~# mount /dev/sda /media/test/
mount: /media/test: /dev/sda already mounted or mount point busy.
root@ubuntu-server:~# mount /dev/sda3 /media/test/
mount: /media/test: unknown filesystem type 'LVM2_member'.
root@ubuntu-server:~#
root@ubuntu-server:~# vgchange -a y pve
  WARNING: Not using lvmetad because a repair command was run.
  /usr/sbin/thin_check: execvp failed: No such file or directory
  Check of pool pve/data failed (status:2). Manual repair required!
  /usr/sbin/thin_check: execvp failed: No such file or directory
  /usr/sbin/thin_check: execvp failed: No such file or directory
  /usr/sbin/thin_check: execvp failed: No such file or directory
  /usr/sbin/thin_check: execvp failed: No such file or directory
  2 logical volume(s) in volume group "pve" now active
root@ubuntu-server:~#
root@ubuntu-server:~# pvscan
  PV /dev/sda3   VG pve             lvm2 [<931.39 GiB / 15.79 GiB free]
  Total: 1 [<931.39 GiB] / in use: 1 [<931.39 GiB] / in no VG: 0 [0   ]
root@ubuntu-server:~#
root@ubuntu-server:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               <931.39 GiB / not usable 1.69 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238435
  Free PE               4043
  Allocated PE          234392
  PV UUID               ie3JBl-wEPn-2bOM-1W22-mNyv-nhp2-SbIBCM

root@ubuntu-server:~#
root@ubuntu-server:~# lvscan
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [96.00 GiB] inherit
  inactive          '/dev/pve/data' [811.39 GiB] inherit
  inactive          '/dev/pve/vm-134-disk-1' [100.00 GiB] inherit
  inactive          '/dev/pve/vm-104-disk-1' [120.00 GiB] inherit
  inactive          '/dev/pve/vm-103-disk-1' [200.00 GiB] inherit
  inactive          '/dev/pve/vm-113-disk-1' [200.00 GiB] inherit
root@ubuntu-server:~#
root@ubuntu-server:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                aXsqn2-6nBs-n2u5-s40U-n09N-2wSr-c6jvzs
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-11-10 09:08:42 +0000
  LV Status              available
  # open                 0
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                kHE9Oj-ST01-jcyy-UqwJ-ZpXN-Oaon-vr1VYO
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-11-10 09:08:43 +0000
  LV Status              available
  # open                 0
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                AC6UqS-o0ZC-UwUy-MDRZ-BFYc-3OfO-7VV35K
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-11-10 09:08:45 +0000
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              NOT available
  LV Size                811.39 GiB
  Current LE             207716
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/pve/vm-134-disk-1
  LV Name                vm-134-disk-1
  VG Name                pve
  LV UUID                VZI8m5-D7rd-ybcC-yOnC-C6Lj-yHgH-N2KH4J
  LV Write Access        read/write
  LV Creation host, time vm06, 2017-10-24 08:59:08 +0000
  LV Pool name           data
  LV Status              NOT available
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/pve/vm-104-disk-1
  LV Name                vm-104-disk-1
  VG Name                pve
  LV UUID                wErXRU-moVR-Ya2T-qxJt-M4T8-nzGe-Tk1nT5
  LV Write Access        read/write
  LV Creation host, time vm06, 2018-06-20 13:54:47 +0000
  LV Pool name           data
  LV Status              NOT available
  LV Size                120.00 GiB
  Current LE             30720
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/pve/vm-103-disk-1
  LV Name                vm-103-disk-1
  VG Name                pve
  LV UUID                Ks1lyq-fgAo-8Pqj-DGrF-91hh-ApDS-lRo3uP
  LV Write Access        read/write
  LV Creation host, time vm06, 2018-06-20 14:06:14 +0000
  LV Pool name           data
  LV Status              NOT available
  LV Size                200.00 GiB
  Current LE             51200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/pve/vm-113-disk-1
  LV Name                vm-113-disk-1
  VG Name                pve
  LV UUID                JrMvsW-9s0p-4hTU-LHCO-PErZ-dQ00-zIxNOL
  LV Write Access        read/write
  LV Creation host, time vm06, 2019-04-25 11:10:06 +0000
  LV Pool name           data
  LV Status              NOT available
  LV Size                200.00 GiB
  Current LE             51200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

root@ubuntu-server:~#
root@ubuntu-server:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0          7:0    0 495.4M  1 loop /media/filesystem
loop1          7:1    0 149.5M  1 loop
loop2          7:2    0    37M  1 loop /media/region.lower
loop3          7:3    0  21.5M  1 loop /media/rack.lower
loop4          7:4    0  86.6M  1 loop /snap/core/4486
loop5          7:5    0  44.6M  1 loop /snap/subiquity/346
sda            8:0    0 931.5G  0 disk
├─sda1         8:1    0  1007K  0 part
├─sda2         8:2    0   127M  0 part
└─sda3         8:3    0 931.4G  0 part
  ├─pve-swap 253:0    0     8G  0 lvm
  └─pve-root 253:1    0    96G  0 lvm
sr0           11:0    1   806M  0 rom  /cdrom
root@ubuntu-server:~#
root@ubuntu-server:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.8G     0  7.8G   0% /dev
tmpfs           1.6G  1.4M  1.6G   1% /run
/dev/sr0        806M  806M     0 100% /cdrom
/dev/loop0      496M  496M     0 100% /rofs
/cow            7.9G  166M  7.7G   3% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs           7.9G     0  7.9G   0% /tmp
/dev/loop3       22M   22M     0 100% /media/rack.lower
overlay          22M   22M     0 100% /media/rack
/dev/loop2       38M   38M     0 100% /media/region.lower
overlay          38M   38M     0 100% /media/region
/dev/loop4       87M   87M     0 100% /snap/core/4486
/dev/loop5       45M   45M     0 100% /snap/subiquity/346
tmpfs           1.6G     0  1.6G   0% /run/user/999
tmpfs           1.6G     0  1.6G   0% /run/user/0
root@ubuntu-server:~#
root@ubuntu-server:~# vgchange -a y pve
  /usr/sbin/thin_check: execvp failed: No such file or directory
  Check of pool pve/data failed (status:2). Manual repair required!
  /usr/sbin/thin_check: execvp failed: No such file or directory
  /usr/sbin/thin_check: execvp failed: No such file or directory
  /usr/sbin/thin_check: execvp failed: No such file or directory
  /usr/sbin/thin_check: execvp failed: No such file or directory
  2 logical volume(s) in volume group "pve" now active
root@ubuntu-server:~#
root@ubuntu-server:~#
root@ubuntu-server:~# lvchange -a y pve/data
  /usr/sbin/thin_check: execvp failed: No such file or directory
  Check of pool pve/data failed (status:2). Manual repair required!
root@ubuntu-server:~#
root@ubuntu-server:~# lvconvert --repair pve/data
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  /usr/sbin/thin_repair: execvp failed: No such file or directory
  Repair of thin metadata volume of thin pool pve/data failed (status:2). Manual repair required!
root@ubuntu-server:~#
root@ubuntu-server:~# lvs -a
  WARNING: Not using lvmetad because a repair command was run.
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi---tz-- 811.39g
  [data_tdata]    pve Twi------- 811.39g
  [data_tmeta]    pve ewi------- 104.00m
  [lvol0_pmspare] pve ewi------- 104.00m
  root            pve -wi-a-----  96.00g
  swap            pve -wi-a-----   8.00g
  vm-103-disk-1   pve Vwi---tz-- 200.00g data
  vm-104-disk-1   pve Vwi---tz-- 120.00g data
  vm-113-disk-1   pve Vwi---tz-- 200.00g data
  vm-134-disk-1   pve Vwi---tz-- 100.00g data
root@ubuntu-server:~#
 
It might just be the metadata that is corrupt. Did it shut down safely, or was it an abrupt power cut? That said, I'm fairly sure it's easy to create new metadata. I have frequent power cuts without a UPS and have never personally experienced this myself, but I'm fairly sure the data is intact; you probably just need to recreate the metadata that LVM needs.
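
A sketch of the usual path (assuming a Debian/Ubuntu live environment with network access; the execvp failures in your output mean the live system simply has no /usr/sbin/thin_check, so LVM never even got as far as checking the pool):

Code:
# thin_check / thin_repair ship in this package on Debian/Ubuntu
apt update && apt install -y thin-provisioning-tools
# Rebuild the thin pool metadata; the old metadata is kept aside in a new LV
lvconvert --repair pve/data
# Then retry activation
vgchange -ay pve
lvs -a

lvconvert --repair rebuilds the metadata into the spare metadata LV (the lvol0_pmspare visible in your lvs -a output), so it does not touch the pool's data device.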
 
The power just cut, so it was abrupt. I have seen this a couple of times with power loss; I assume that when the power cuts during a write operation it causes corruption. The output does make me hopeful I can recover somehow, but I cannot find anything on how to proceed. I will look into the metadata and how to recreate it; any pointers or links would be greatly appreciated. Many thanks for taking the time to respond.
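
If the repair works and the pool activates, my plan is to activate each thin volume and image it off to separate storage before doing anything else (a sketch; /mnt/backup is a placeholder for wherever external storage is mounted):

Code:
# -K overrides the activation-skip flag that thin volumes can carry
lvchange -ay -K pve/vm-103-disk-1
# Raw copy of the VM disk; repeat for each vm-*-disk-1 volume
dd if=/dev/pve/vm-103-disk-1 of=/mnt/backup/vm-103-disk-1.raw bs=4M status=progress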
 
