Proxmox mdadm /dev/md0 disappears after reboot

CaZaE

New Member
Oct 30, 2017
Hello,

Proxmox 4.4, installed on 2x 32 GB SSDs in RAID 1, plus 3x 4 TB disks in RAID 5.

After a bad crash I would like to reconstruct my server data.

I have a RAID 5 made with mdadm, and I have enabled the md setting I found in lvm.conf.

After I built my RAID 5 (3x 4 TB) I could see it, create my PV, my VG, and all the LVs I need.
The problem is as in my title: after a reboot, nothing appears...

If I retry to build another RAID 5, it tells me:
Code:
root@pve:~# mdadm --create /dev/md0 --level=5 --assume-clean --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid5 devices=3 ctime=Mon Oct 30 21:38:56 2017
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid5 devices=3 ctime=Mon Oct 30 21:38:56 2017
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid5 devices=3 ctime=Mon Oct 30 21:38:56 2017
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array

Where did I go wrong? Do I need to add it to some special file?

Thanks in advance for your help!
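For reference, re-running mdadm --create rewrites the member superblocks, so before doing that it is usually safer to check whether the old array can simply be re-assembled. A sketch, assuming whole-disk members /dev/sdb, /dev/sdc and /dev/sdd as in the command above:

Code:
# Read-only: inspect the md superblocks on each member disk
mdadm --examine /dev/sdb /dev/sdc /dev/sdd

# Try re-assembling the existing array instead of creating a new one
mdadm --assemble --scan

# Check whether md0 came up
cat /proc/mdstat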
 

CaZaE

New Member
Oct 30, 2017
Hi,

I have tried another thing: I converted my LV to a thin LV, added it in the Proxmox GUI, and then started a VM on it: perfect, it works, and I could use my VM normally.

Then I rebooted: all my LVs / VGs disappeared, and my VM gives me this error message:

Code:
kvm: -drive file=/dev/vgpool/vm-104-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on: Could not open '/dev/vgpool/vm-104-disk-1': No such file or directory
TASK ERROR: start failed: command '/usr/bin/kvm -id 104 -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/104.pid -daemonize -smbios 'type=1,uuid=2474fe4d-b667-48de-aaf7-270cd7caa5d0' -name OMV2 -smp '4,sockets=2,cores=2,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga cirrus -vnc unix:/var/run/qemu-server/104.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 1024 -k fr -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:274dbd733da2' -drive 'file=/var/lib/vz/template/iso/openmediavault_3.0.58-amd64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/vgpool/vm-104-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=0A:E6:93:F0:ED:56,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1

Any help?
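The "No such file or directory" error suggests /dev/md0 was never assembled at boot, so the VG on top of it was never activated. A quick check sequence, assuming the VG is named vgpool as in the error message:

Code:
cat /proc/mdstat          # is md0 assembled at all?
mdadm --assemble --scan   # assemble arrays from member superblocks
pvscan                    # rescan PVs once md0 exists
vgchange -ay vgpool       # activate the volume group
lvs vgpool                # vm-104-disk-1 should be listed again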
 

CaZaE

New Member
Oct 30, 2017
Today I tried to mount one of them; it changed nothing.

Has nobody had the same problem?
 

CaZaE

New Member
Oct 30, 2017
Hi, I tried vgcfgrestore:

Code:
root@pve:~# vgcfgrestore --force vgpool
  Couldn't find device with uuid owUpRe-63Cj-Y3F7-yMYR-xftd-V1ut-1tYJyW.
  WARNING: Forced restore of Volume Group vgpool with thin volumes.
  Cannot restore Volume Group vgpool with 1 PVs marked as missing.
  Restore failed.

I can find my vgpool in /etc/lvm/backup/vgpool:

vgpool {
id = "QuGt2z-kud7-SJMK-G17T-2DU0-G2em-fhN2kq"
seqno = 13
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0

physical_volumes {

pv0 {
id = "owUpRe-63Cj-Y3F7-yMYR-xftd-V1ut-1tYJyW"
device = "/dev/md0" # Hint only

status = ["ALLOCATABLE"]
flags = []
dev_size = 15627548672 # 7.27714 Terabytes
pe_start = 2048
pe_count = 1907659 # 7.27714 Terabytes
}
}

logical_volumes {

lvTest {
id = "cJfIwW-Xu6C-2nBq-OljY-2te9-nD1d-t9GK03"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "pve"
creation_time = 1509469645 # 2017-10-31 18:07:25 +$
segment_count = 1

segment1 {

All my LVM parts are there...
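When vgcfgrestore reports a missing PV, the usual recovery, once /dev/md0 is assembled again, is to re-create the PV with the UUID recorded in the backup file and then restore the VG metadata. This is destructive if pointed at the wrong device, so treat it as a sketch only:

Code:
# Re-create the PV on md0 using the UUID from /etc/lvm/backup/vgpool
pvcreate --uuid "owUpRe-63Cj-Y3F7-yMYR-xftd-V1ut-1tYJyW" \
         --restorefile /etc/lvm/backup/vgpool /dev/md0

# Restore the VG metadata and activate it
vgcfgrestore vgpool
vgchange -ay vgpool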
 

CaZaE

New Member
Oct 30, 2017
I may have a solution:

Code:
echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
echo "HOMEHOST fileserver" >> /etc/mdadm/mdadm.conf
echo "MAILADDR youruser@gmail.com" >> /etc/mdadm/mdadm.conf
mdadm --detail --scan | cut -d " " -f 4 --complement >> /etc/mdadm/mdadm.conf

update-initramfs -u
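After those commands (the cut strips the name= field from the scan output), /etc/mdadm/mdadm.conf should end with an ARRAY line describing md0, roughly like this (the UUID below is illustrative; yours will differ):

Code:
DEVICE partitions
HOMEHOST fileserver
MAILADDR youruser@gmail.com
ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

With the ARRAY line present in the initramfs, the array should be assembled at boot and the LVs on top of it should reappear.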
 

n0ll

Member
Nov 11, 2020
Easy! You need to make sure you create the RAID using actual partitions, not whole disks.
Create partitions at /dev/sda1 and /dev/sdb1.
Do not just use /dev/sda or /dev/sdb: a lot of guides on the internet wrongly show "/dev/sda and /dev/sdb". You must use "/dev/sda1", including the 1!

Code:
wipefs --all --force /dev/sda /dev/sdb
cfdisk /dev/sda    # create a new partition, set type to "Linux RAID", write changes
cfdisk /dev/sdb    # create a new partition, set type to "Linux RAID", write changes
# Use the mdadm create command with the partition numbers!
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

It's a weird quirk, but it's an easy fix!
 

n0ll

Member
Nov 11, 2020
CaZaE said:
> mdadm --create /dev/md0 --level=5 --assume-clean --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

"/dev/sdb /dev/sdc /dev/sdd" must be changed to "/dev/sdb1 /dev/sdc1 /dev/sdd1".
 
