RAID Vanished after upgrade

broadband9

Member
Jun 12, 2020
Everything was working fine on version 6.3.3; I hit upgrade and rebooted.

I have two RAID arrays:

500GB + 500GB NVMe = RAID 1 software

4TB + 4TB = RAID 1 software

md5, which held the 2 x 4TB disks, has completely vanished, and the separate disks show as ready to initialize!

My md5 array was mounted on /mnt/pve.

The GUI does not show the files inside the mount.

Half of my VMs are stored on the NVMe drives, so I'm OK for those, but the other VMs are on the larger disks.

Output:

Code:
root@PM01:/dev# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md4 : active raid1 nvme0n1p4[0] nvme1n1p4[1]
      477557696 blocks [2/2] [UU]
      bitmap: 3/4 pages [12KB], 65536KB chunk

md2 : active raid1 nvme0n1p2[0] nvme1n1p2[1]
      20970432 blocks [2/2] [UU]

unused devices: <none>
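md5 isn't listed there at all. From what I've read, the first thing to check is whether the mdraid superblocks are still on the disks with mdadm --examine. This is just a guess on my part that the members were the whole disks sda and sdb, since lsblk below shows no partitions on them:

Code:
# check both disks for a surviving mdraid superblock (read-only, non-destructive)
mdadm --examine /dev/sda
mdadm --examine /dev/sdb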


lsblk shows sda and sdb, but they are not associated with any RAID device.
Code:
root@PM01:/dev# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0            7:0    0    25G  0 loop
sda              8:0    0   3.7T  0 disk
sdb              8:16   0   3.7T  0 disk
sr0             11:0    1  1024M  0 rom
nvme0n1        259:0    0   477G  0 disk
├─nvme0n1p1    259:2    0   511M  0 part
├─nvme0n1p2    259:3    0    20G  0 part
│ └─md2          9:2    0    20G  0 raid1 /
├─nvme0n1p3    259:4    0  1023M  0 part  [SWAP]
├─nvme0n1p4    259:5    0 455.4G  0 part
│ └─md4          9:4    0 455.4G  0 raid1
│   └─pve-data 253:0    0 451.4G  0 lvm   /var/lib/vz
└─nvme0n1p5    259:6    0     1M  0 part
nvme1n1        259:1    0   477G  0 disk
├─nvme1n1p1    259:7    0   511M  0 part  /boot/efi
├─nvme1n1p2    259:8    0    20G  0 part
│ └─md2          9:2    0    20G  0 raid1 /
├─nvme1n1p3    259:9    0  1023M  0 part  [SWAP]
└─nvme1n1p4    259:10   0 455.4G  0 part
  └─md4          9:4    0 455.4G  0 raid1
    └─pve-data 253:0    0 451.4G  0 lvm   /var/lib/vz

Please help me rebuild this md5 array so I can get the VMs back up and running.
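From searching around, it sounds like an array can be reassembled from the on-disk superblocks rather than rebuilt, if they survived the upgrade. Is something like this safe to run? A rough sketch; the device names are my assumption from the lsblk output above:

Code:
# try to reassemble md5 from its existing superblocks (does not rewrite data)
mdadm --assemble /dev/md5 /dev/sda /dev/sdb
# or let mdadm scan for any arrays it can find
mdadm --assemble --scan
# then remount it where it used to live
mount /dev/md5 /mnt/pve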

I would really appreciate it, and I really don't like the fact that Proxmox just deleted my RAID array after a simple reboot!
 
I would really appreciate it, and I really don't like the fact that Proxmox just deleted my RAID array after a simple reboot!
Mdraid isn't supported by Proxmox, so you can't really complain about a Proxmox update destroying your array.
Proxmox only supports Ceph and ZFS if you want software RAID, because mdraid is missing all the data integrity features.

At least here, Proxmox 6.3 is still working fine with mdraid, so that's not a general problem.
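That said, if mdadm --examine still shows valid superblocks on both disks, reassembling and then making the array definition persistent should let it survive future reboots. A sketch, with array and device names assumed from your output:

Code:
# reassemble whatever arrays the on-disk superblocks describe
mdadm --assemble --scan
# record the array so it is assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so the early-boot environment knows about it
update-initramfs -u -k all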
 
