How to recover data from raid1 disks previously installed in another Proxmox server

venethia

New Member
Jul 8, 2023
Hi everyone,

I'm a new forum subscriber and a 'rookie' Proxmox user (1 year).

I removed two 4TB SATA disks from my old (now dead) Proxmox server and I need to recover the data on them.

I have been trying for days to mount those disks in a new Proxmox server, but I cannot access them.

I tried to re-create the RAID with mdadm and then re-mount it, but that failed too.
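
(For reference, as I understand it the non-destructive way to bring an existing array back up is to assemble it rather than create it again, since re-creating rewrites the metadata. A minimal sketch, using the member partitions that show up below as /dev/sdb1 and /dev/sdc1:)

Code:
mdadm --assemble --scan                        # detect and start existing arrays
# or explicitly, from the known members:
mdadm --assemble /dev/md3 /dev/sdb1 /dev/sdc1
cat /proc/mdstat                               # confirm the array came up
mdadm --detail /dev/md3                        # and check its state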

I have included some details below; I hope someone has already solved this problem.

Any help would be appreciated.

Thanks.

Code:
fdisk -l

Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: Dual SATA Bridge
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 5176ABFA-49D5-4E71-B3B6-129110AE5E47

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 7814035455 7814033408  3.6T Linux RAID


Disk /dev/sdc: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: Dual SATA Bridge
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B7EA93A7-224A-4A8D-8A63-9E8D6C7B46DF

Device     Start        End    Sectors  Size Type
/dev/sdc1   2048 7814035455 7814033408  3.6T Linux RAID


Disk /dev/md3: 3.64 TiB, 4000650887168 bytes, 7813771264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9A1AD1CD-BE1B-448C-94FE-0CB94CCDE2C0

Device     Start        End    Sectors  Size Type
/dev/md3p1  2048 7813771230 7813769183  3.6T Solaris reserved 2

Code:
root@server:/mnt# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                            8:0    0 465.8G  0 disk 
├─sda1                         8:1    0  1007K  0 part 
├─sda2                         8:2    0   512M  0 part  /boot/efi
└─sda3                         8:3    0 465.3G  0 part 
  ├─pve-swap                 253:0    0     7G  0 lvm   [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm   /
  ├─pve-data_tmeta           253:2    0   3.5G  0 lvm   
  │ └─pve-data-tpool         253:4    0 339.3G  0 lvm   
  │   ├─pve-data             253:5    0 339.3G  1 lvm   
  │   ├─pve-vm--110--disk--0 253:6    0    32G  0 lvm   
  │   ├─pve-vm--119--disk--0 253:7    0    32G  0 lvm   
  │   └─pve-vm--127--disk--0 253:8    0    16G  0 lvm   
  └─pve-data_tdata           253:3    0 339.3G  0 lvm   
    └─pve-data-tpool         253:4    0 339.3G  0 lvm   
      ├─pve-data             253:5    0 339.3G  1 lvm   
      ├─pve-vm--110--disk--0 253:6    0    32G  0 lvm   
      ├─pve-vm--119--disk--0 253:7    0    32G  0 lvm   
      └─pve-vm--127--disk--0 253:8    0    16G  0 lvm   
sdb                            8:16   0   3.6T  0 disk 
└─sdb1                         8:17   0   3.6T  0 part 
  └─md3                        9:3    0   3.6T  0 raid1
    └─md3p1                  259:0    0   3.6T  0 part 
sdc                            8:32   0   3.6T  0 disk 
└─sdc1                         8:33   0   3.6T  0 part 
  └─md3                        9:3    0   3.6T  0 raid1
    └─md3p1                  259:0    0   3.6T  0 part

Code:
root@server:/mnt# mount -t ext4 /dev/md3p1 /mnt/minerbe/
mount: /mnt/minerbe: wrong fs type, bad option, bad superblock on /dev/md3p1, missing codepage or helper program, or other error.

Code:
root@server:/mnt# mount -t zfs /dev/md3p1 /mnt/minerbe/
filesystem '/dev/md3p1' cannot be mounted, unable to open the dataset

Code:
root@server:/mnt# mount /dev/md3 /mnt/minerbe
mount: /mnt/minerbe: wrong fs type, bad option, bad superblock on /dev/md3, missing codepage or helper program, or other error.

Code:
root@server:/mnt# mdadm --detail /dev/md3
/dev/md3:
           Version : 1.2
     Creation Time : Mon Feb 18 00:54:12 2019
        Raid Level : raid1
        Array Size : 3906885632 (3725.90 GiB 4000.65 GB)
     Used Dev Size : 3906885632 (3725.90 GiB 4000.65 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Jul  8 16:54:38 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : proxmox:3
              UUID : d2546b8d:6aebdfac:08264ee5:5fdcab09
            Events : 15438

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Code:
root@server:/mnt# cat /proc/mdstat 
Personalities : [raid1] 
md3 : active raid1 sdb1[0] sdc1[1]
      3906885632 blocks super 1.2 [2/2] [UU]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>
 
There are people more expert than me who can probably give you better answers, but in similar situations I simply avoid complicating my life by trying to mount PVE disks in another PVE system (LVM conflicts etc., in my experience) and just access the disk(s) from a SystemRescue boot.

P.
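
From the live environment the rough sequence would be the same in any case; a minimal sketch, assuming an mdadm RAID1 with LVM on top (the names are placeholders):

Code:
mdadm --assemble --scan        # bring up the existing md array(s)
vgscan                         # look for volume groups on them
vgchange -ay                   # activate whatever was found
lvs                            # list the logical volumes
mount /dev/<vg>/<lv> /mnt      # mount one of them (substitute the real names)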
 
what do these commands show:
fsck -N /dev/md3p1
lsblk -f
blkid /dev/md3p1
file -sL /dev/md3p1


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

1)

Code:
root@server:/mnt# fsck -N /dev/md3p1
fsck from util-linux 2.36.1

2)

Code:
root@server:/mnt# lsblk -f
NAME                         FSTYPE            FSVER    LABEL     UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda                                                                                                                     
├─sda1                                                                                                                 
├─sda2                       vfat              FAT32              D503-F0FA                               510.7M     0% /boot/efi
└─sda3                       LVM2_member       LVM2 001           3fco9E-ZIpV-75c0-eIlV-Guc0-MNrP-tY1P4G               
  ├─pve-swap                 swap              1                  a5ed9b86-b9a4-45d2-9848-87a9d3f3a422                  [SWAP]
  ├─pve-root                 ext4              1.0                04047cb6-2030-4bc0-8ff2-668c5abaaddb     81.7G     8% /
  ├─pve-data_tmeta                                                                                                     
  │ └─pve-data-tpool                                                                                                   
  │   ├─pve-data                                                                                                       
  │   ├─pve-vm--110--disk--0                                                                                           
  │   ├─pve-vm--119--disk--0                                                                                           
  │   └─pve-vm--127--disk--0                                                                                           
  └─pve-data_tdata                                                                                                     
    └─pve-data-tpool                                                                                                   
      ├─pve-data                                                                                                       
      ├─pve-vm--110--disk--0                                                                                           
      ├─pve-vm--119--disk--0                                                                                           
      └─pve-vm--127--disk--0                                                                                           
sdb                                                                                                                     
└─sdb1                       linux_raid_member 1.2      proxmox:3 d2546b8d-6aeb-dfac-0826-4ee55fdcab09                 
  └─md3                                                                                                                 
    └─md3p1                  LVM2_member       LVM2 001           RKEcLx-J9dJ-ZsdK-bO1w-K6t6-UhbH-naw2gy               
sdc                                                                                                                     
└─sdc1                       linux_raid_member 1.2      proxmox:3 d2546b8d-6aeb-dfac-0826-4ee55fdcab09                 
  └─md3                                                                                                                 
    └─md3p1                  LVM2_member       LVM2 001           RKEcLx-J9dJ-ZsdK-bO1w-K6t6-UhbH-naw2gy

3)

Code:
root@server:/mnt# blkid /dev/md3p1
/dev/md3p1: UUID="RKEcLx-J9dJ-ZsdK-bO1w-K6t6-UhbH-naw2gy" TYPE="LVM2_member" PARTUUID="23478340-8b90-4dae-90ce-28941f906b01"

4)

Code:
root@server:/mnt# file -sL /dev/md3p1
/dev/md3p1: LVM2 PV (Linux Logical Volume Manager), UUID: RKEcLx-J9dJ-ZsdK-bO1w-K6t6-UhbH-naw2gy, size: 4000649821696
 
since the volume group created by proxmox is always named pve, the old vg would not activate by default. you'd need to rename the old vg before you can activate it at the same time as the new one. to do that, you need to get the uuid of the old vg, like so:

vgs -o +vg_uuid

once you get the uuid, you can rename the old vg like so:

vgrename [uuid] pve_old (or whatever)


then you'd be able to activate and mount the logical volume(s)
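
Put together, the whole dance would look roughly like this (a sketch; pve_old and the mount point are just example names):

Code:
vgs -o +vg_uuid                             # list VGs with their UUIDs
vgrename <uuid-of-old-vg> pve_old           # rename the old VG by UUID to avoid the clash
vgchange -ay pve_old                        # activate it
lvs pve_old                                 # see which logical volumes it contains
mount /dev/pve_old/<lv-name> /mnt/minerbe   # mount one of them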
 

This is the result: it seems I already have a volume group named vg0.

Code:
root@server:/mnt# vgs -o +vg_uuid
  VG  #PV #LV #SN Attr   VSize    VFree   VG UUID                             
  pve   1   6   0 wz--n- <465.26g <16.00g 83XWj7-1QvZ-w0VA-ZHbM-OOmf-9Kqi-1pYTtQ
  vg0   1   0   0 wz--n-   <3.64t  <3.64t QPAmTd-cH1h-O7aW-XPNe-zw17-Q1XW-hznzwd

Do I need to rename it?

Anyway, it's strange that VSize is equal to VFree. Maybe that's because there aren't any LVs?
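
If I read the vgs output right, VFree being equal to VSize means no extents are allocated, so LVM currently sees no logical volumes in vg0 at all. Some read-only checks that should confirm it (a sketch):

Code:
lvs -a vg0        # list all LVs in vg0, including hidden/inactive ones
vgdisplay vg0     # detailed VG view (Cur LV, Alloc PE, Free PE)
lvscan            # scan for logical volumes across all VGs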
 
There are people more expert than me who can probably give you better answers, but in similar situations I simply avoid complicating my life by trying to mount PVE disks in another PVE system (LVM conflicts etc., in my experience) and just access the disk(s) from a SystemRescue boot.

P.

The filesystem type is not recognized from any live distro either.
 
These are not the disks with the previous Proxmox install; that was on a single 250GB NVMe. These are the disks with the data to recover.
 
I'd agree; if the RAID array is assembled (which it seems to be), the volume group needs to be activated. If the activation is failing, there should be a message in the journal/dmesg. Once the volume group is activated, you should see LVs via "lvs". Then you may be able to mount them, depending on what is on them.
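
A minimal sketch of those checks, assuming the vg0 shown above is the volume group in question:

Code:
vgchange -ay vg0                       # try to activate the volume group
lvs vg0                                # list its logical volumes, if any
journalctl -b | grep -iE 'lvm|md3'     # look for LVM/md messages in the journal
dmesg | grep -iE 'lvm|md3'             # and in the kernel log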


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
