Trying to recover data; can't mount containers (missing codepage or helper program or other error)

mackhax0r

Renowned Member
Jul 2, 2015
Hello,

I recently had a hardware failure (RAID). I ran the server for a few days in that state before the replacement hardware came in, and then did a fresh install (it was overdue!). There is some data on the old disks I'd like to recover, but I'm not having any luck. I am booted into System Rescue.

I can see all of my containers on /dev/sdb (NOT pveold):
Bash:
[root@sysrescue ~]# lsblk
NAME                            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0                               2:0    1     4K  0 disk
loop0                             7:0    0 637.5M  1 loop /run/archiso/sfs/airootfs
sda                               8:0    0   127G  0 disk
├─sda1                            8:1    0  1007K  0 part
├─sda2                            8:2    0   512M  0 part
└─sda3                            8:3    0 126.5G  0 part
  ├─pveold-swap                 254:0    0     8G  0 lvm
  ├─pveold-root                 254:1    0  31.5G  0 lvm
  ├─pveold-data_tmeta           254:2    0     1G  0 lvm
  │ └─pveold-data-tpool         254:4    0  69.2G  0 lvm
  │   ├─pveold-data             254:5    0  69.2G  1 lvm
  │   └─pveold-vm--110--disk--0 254:6    0     8G  0 lvm
  └─pveold-data_tdata           254:3    0  69.2G  0 lvm
    └─pveold-data-tpool         254:4    0  69.2G  0 lvm
      ├─pveold-data             254:5    0  69.2G  1 lvm
      └─pveold-vm--110--disk--0 254:6    0     8G  0 lvm
sdb                               8:16   0   2.7T  0 disk
├─sdb1                            8:17   0   256M  0 part
└─sdb2                            8:18   0   2.7T  0 part
  ├─pve-swap                    254:7    0     8G  0 lvm
  ├─pve-root                    254:8    0 103.3G  0 lvm
  ├─pve-data_tmeta              254:9    0    84M  0 lvm
  │ └─pve-data-tpool            254:11   0   2.6T  0 lvm
  │   ├─pve-data                254:12   0   2.6T  1 lvm
  │   ├─pve-vm--100--disk--1    254:13   0    40G  0 lvm
  │   ├─pve-vm--1000--disk--1   254:14   0    50G  0 lvm
  │   ├─pve-vm--300--disk--1    254:15   0    60G  0 lvm
  │   ├─pve-vm--102--disk--1    254:16   0    10G  0 lvm
  │   ├─pve-vm--103--disk--1    254:17   0    50G  0 lvm
  │   ├─pve-vm--107--disk--0    254:18   0 100.5G  0 lvm
  │   ├─pve-vm--107--disk--1    254:19   0   100G  0 lvm
  │   ├─pve-vm--100--disk--0    254:20   0   100G  0 lvm
  │   ├─pve-vm--106--disk--0    254:21   0    15G  0 lvm
  │   ├─pve-vm--101--disk--0    254:22   0    10G  0 lvm
  │   ├─pve-vm--200--disk--0    254:23   0     5G  0 lvm
  │   ├─pve-vm--2001--disk--0   254:24   0    20G  0 lvm
  │   ├─pve-vm--104--disk--0    254:25   0     8G  0 lvm
  │   ├─pve-vm--500--disk--1    254:26   0     8G  0 lvm
  │   ├─pve-vm--111--disk--0    254:27   0     8G  0 lvm
  │   ├─pve-vm--2002--disk--0   254:28   0    20G  0 lvm
  │   ├─pve-vm--400--disk--0    254:29   0   100G  0 lvm
  │   ├─pve-vm--105--disk--0    254:30   0    20G  0 lvm
  │   ├─pve-vm--109--disk--0    254:31   0     8G  0 lvm
  │   └─pve-vm--6000--disk--0   254:32   0     8G  0 lvm
  └─pve-data_tdata              254:10   0   2.6T  0 lvm
    └─pve-data-tpool            254:11   0   2.6T  0 lvm
      ├─pve-data                254:12   0   2.6T  1 lvm
      ├─pve-vm--100--disk--1    254:13   0    40G  0 lvm
      ├─pve-vm--1000--disk--1   254:14   0    50G  0 lvm
      ├─pve-vm--300--disk--1    254:15   0    60G  0 lvm
      ├─pve-vm--102--disk--1    254:16   0    10G  0 lvm
      ├─pve-vm--103--disk--1    254:17   0    50G  0 lvm
      ├─pve-vm--107--disk--0    254:18   0 100.5G  0 lvm
      ├─pve-vm--107--disk--1    254:19   0   100G  0 lvm
      ├─pve-vm--100--disk--0    254:20   0   100G  0 lvm
      ├─pve-vm--106--disk--0    254:21   0    15G  0 lvm
      ├─pve-vm--101--disk--0    254:22   0    10G  0 lvm
      ├─pve-vm--200--disk--0    254:23   0     5G  0 lvm
      ├─pve-vm--2001--disk--0   254:24   0    20G  0 lvm
      ├─pve-vm--104--disk--0    254:25   0     8G  0 lvm
      ├─pve-vm--500--disk--1    254:26   0     8G  0 lvm
      ├─pve-vm--111--disk--0    254:27   0     8G  0 lvm
      ├─pve-vm--2002--disk--0   254:28   0    20G  0 lvm
      ├─pve-vm--400--disk--0    254:29   0   100G  0 lvm
      ├─pve-vm--105--disk--0    254:30   0    20G  0 lvm
      ├─pve-vm--109--disk--0    254:31   0     8G  0 lvm
      └─pve-vm--6000--disk--0   254:32   0     8G  0 lvm
sdc                               8:32   0 465.8G  0 disk
├─sdc1                            8:33   0    20G  0 part
├─sdc2                            8:34   0    20G  0 part
├─sdc3                            8:35   0    40G  0 part
├─sdc4                            8:36   0   800K  0 part
└─sdc5                            8:37   0    51G  0 part
sr0                              11:0    1   699M  0 rom  /run/archiso/bootmnt

However, if I try to mount *any* of the container volumes, I get:

Bash:
[root@sysrescue ~]# mount /dev/pve/vm-100-disk-0 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--100--disk--0, missing codepage or helper program, or other error.
[root@sysrescue ~]# mount /dev/pve/root /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/mapper/pve-root, missing codepage or helper program, or other error.
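One thing I've been checking before mounting: whether the LV actually contains a filesystem at all. My understanding is that VM disks (as opposed to container volumes) carry a whole partition table, so mounting the LV directly would fail with exactly this error. A sketch of what I've been running (the device name is just the example from above; adjust as needed):

```shell
# Inspect what is actually on an LV before trying to mount it.
dev=/dev/mapper/pve-vm--100--disk--0

if [ -e "$dev" ]; then
    # Identify the content: a bare filesystem, or a partition table?
    blkid -p "$dev"
    file -s "$dev"

    # If it is a whole-disk image (a VM disk rather than a container
    # volume), map its partitions first and mount one of those instead:
    #   kpartx -av "$dev"    # creates .../pve-vm--100--disk--0p1 etc.
    #   mount /dev/mapper/pve-vm--100--disk--0p1 /mnt
else
    echo "LV $dev not present on this system"
fi
```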

Here is some further output (lvdisplay excerpt, vgdisplay, the /dev/mapper listing, and fsck):

Bash:
 --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                S1EIQv-izMt-oNNS-9AvM-QHKB-qHuR-w2rfWC
  LV Write Access        read/write
  LV Creation host, time proxmox, 2018-05-14 00:46:14 +0000
  LV Status              available
  # open                 0
  LV Size                <103.29 GiB
  Current LE             26442
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:8

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                cNv5fX-iodb-8qut-dGJ2-QDri-SSZ2-66pw7g
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2018-05-14 00:46:14 +0000
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 21
  LV Size                2.61 TiB
  Allocated pool data    8.22%
  Allocated metadata     14.34%
  Current LE             684605
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:11
 
  [root@sysrescue ~]# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3659
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                25
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <2.73 TiB
  PE Size               4.00 MiB
  Total PE              715324
  Alloc PE / Size       713137 / 2.72 TiB
  Free  PE / Size       2187 / 8.54 GiB
  VG UUID               az9Ict-9phq-H2Gc-2teM-obnG-XCNN-9YgW7a
 
 
[root@sysrescue ~]# ls -lah /dev/mapper/
total 0
drwxr-xr-x  2 root root     720 Feb 23 06:06 .
drwxr-xr-x 20 root root    4.2K Feb 23 06:07 ..
crw-------  1 root root 10, 236 Feb 23 06:06 control
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-data -> ../dm-12
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-data_tdata -> ../dm-10
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pve-data_tmeta -> ../dm-9
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-data-tpool -> ../dm-11
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pveold-data -> ../dm-5
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pveold-data_tdata -> ../dm-3
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pveold-data_tmeta -> ../dm-2
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pveold-data-tpool -> ../dm-4
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pveold-root -> ../dm-1
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pveold-swap -> ../dm-0
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pveold-vm--110--disk--0 -> ../dm-6
lrwxrwxrwx  1 root root       7 Feb 23 06:09 pve-root -> ../dm-8
lrwxrwxrwx  1 root root       7 Feb 23 06:06 pve-swap -> ../dm-7
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--1000--disk--1 -> ../dm-14
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--100--disk--0 -> ../dm-20
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--100--disk--1 -> ../dm-13
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--101--disk--0 -> ../dm-22
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--102--disk--1 -> ../dm-16
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--103--disk--1 -> ../dm-17
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--104--disk--0 -> ../dm-25
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--105--disk--0 -> ../dm-30
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--106--disk--0 -> ../dm-21
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--107--disk--0 -> ../dm-18
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--107--disk--1 -> ../dm-19
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--109--disk--0 -> ../dm-31
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--111--disk--0 -> ../dm-27
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--2001--disk--0 -> ../dm-24
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--2002--disk--0 -> ../dm-28
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--200--disk--0 -> ../dm-23
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--300--disk--1 -> ../dm-15
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--400--disk--0 -> ../dm-29
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--500--disk--1 -> ../dm-26
lrwxrwxrwx  1 root root       8 Feb 23 06:06 pve-vm--6000--disk--0 -> ../dm-32

[root@sysrescue ~]# fsck /dev/mapper/pve-root
fsck from util-linux 2.36.1
e2fsck 1.45.6 (20-Mar-2020)
The filesystem size (according to the superblock) is 29174784 blocks
The physical size of the device is 27076608 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>?
e2fsck 1.45.6 (20-Mar-2020)
The filesystem size (according to the superblock) is 29174784 blocks
The physical size of the device is 27076608 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? no
/dev/mapper/pve-root contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Error reading block 27262976 (Invalid argument) while reading inode and block bitmaps.  Ignore error<y>?
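Since the primary superblock looks suspect, I was planning to point e2fsck at one of the backup superblocks next. A rough sketch of where the backups should live (this assumes the default 4 KiB block size, which matches fsck's block counts above; mke2fs -n against the device would print the real list without writing anything):

```shell
# Candidate ext4 backup-superblock locations for a 4 KiB-block filesystem.
# Backups live in block groups 1, 3^k, 5^k and 7^k, at group * blocks_per_group.
fs_blocks=29174784   # block count my superblock reports, per fsck above
bpg=32768            # blocks per group with 4 KiB blocks

candidates=$(
    for g in 1 3 5 7 9 25 27 49 81 125 243 343 625 729; do
        echo $((g * bpg))
    done | awk -v max="$fs_blocks" '$1 < max'
)
echo "$candidates"
first=$(echo "$candidates" | head -n1)

# A check against the first backup would then look like:
#   e2fsck -b 32768 -B 4096 /dev/mapper/pve-root
```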


I've tried everything I can think of to get this back up, and I've run out of ideas. Any suggestions?
 
Further, when I try to boot the old OS I get an error (sorry for the screenshot):
1614061713835.png

I see that the block counts disagree (the superblock claims 29174784 blocks, but the device only holds 27076608), but nothing I do lets me correct it.
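Doing the math on those fsck numbers: the superblock claims 2098176 more 4 KiB blocks than the LV actually has, which is almost exactly the 8.54 GiB of free PE that vgdisplay reports. So my guess is pve-root was shrunk at some point without a resize2fs. I'm wondering whether temporarily growing the LV back by that amount would let fsck complete. A sketch of the arithmetic (I would image the disk before actually running the commented commands):

```shell
# How much of the filesystem is hanging off the end of the LV?
sb_blocks=29174784    # size according to the superblock (from fsck)
dev_blocks=27076608   # physical size of pve-root (from fsck)
block_size=4096

missing_bytes=$(( (sb_blocks - dev_blocks) * block_size ))
missing_extents=$(( missing_bytes / (4 * 1024 * 1024) ))   # PE size is 4 MiB per vgdisplay
echo "$missing_bytes bytes missing ($missing_extents extents)"

# vgdisplay shows 2187 free PE (8.54 GiB), which covers the 2049 extents
# needed, so (after imaging the disk!) something like this might work:
#   lvextend -l +$missing_extents /dev/pve/root
#   e2fsck -f /dev/mapper/pve-root
```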