[SOLVED] Design question about using OMV VM to create RAID via passthrough disks.

Muddayuck

Member
Apr 5, 2022
I have a design question about passing through 10 disks to an OpenMediaVault VM and building a RAID6 array inside the VM.

Creating the RAID6 array on the OMV VM works correctly. What got me thinking about long-term recovery was having to restore my OMV VM and manually re-pass through my 10 disks: I forgot to attach 1 of the 10 disks, which caused OMV to show the RAID6 filesystem as "Missing". I fixed it by attaching the last disk, after which OMV showed the RAID6 filesystem as "Online".
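(For reference, a quick sanity check I can run from inside the OMV VM to confirm all 10 members actually made it through would be something like the following; the md device name inside the VM is just an assumption here and may differ on your system:)

cat /proc/mdstat                 # should list all 10 member disks for the RAID6 array
mdadm --detail /dev/md0          # expect "Raid Devices : 10" and "Active Devices : 10" if nothing is missing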

My main question: if I were to lose a disk on the PVE host, how would I bring the filesystem back online?

Here is the output of 'ls -n /dev/disk/by-id', which should show all drives and RAID members:
***Note: all 10 disks set up as RAID6 via OMV are sda to sdj***
root@px17 ~ # ls -n /dev/disk/by-id
total 0
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 ata-ST16000NM001J-2TW113_ZR6021LH -> ../../sda
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 ata-ST16000NM001J-2TW113_ZR6021MK -> ../../sdh
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 ata-ST16000NM001J-2TW113_ZR6021N5 -> ../../sdf
lrwxrwxrwx 1 0 0  9 Jul 7 22:44 ata-ST16000NM001J-2TW113_ZR6021RG -> ../../sdj
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 ata-ST16000NM001J-2TW113_ZR6021SL -> ../../sdb
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 ata-ST16000NM001J-2TW113_ZR6022AT -> ../../sdg
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 ata-ST16000NM001J-2TW113_ZR7024B1 -> ../../sdc
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 ata-ST16000NM001J-2TW113_ZR70252Y -> ../../sdd
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 ata-ST16000NM001J-2TW113_ZR702587 -> ../../sde
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 ata-ST16000NM001J-2TW113_ZR7025HZ -> ../../sdi
lrwxrwxrwx 1 0 0 10 Jul 7 03:50 dm-name-vg0-root -> ../../dm-0
lrwxrwxrwx 1 0 0 10 Jul 7 03:50 dm-name-vg0-swap -> ../../dm-1
lrwxrwxrwx 1 0 0 10 Jul 7 03:50 dm-uuid-LVM-9aFKuMey6ADpOxY8TRtVpjFfXxwlizuPCtwwPMsIXh4mUO3Q85Za6MU2QDJydkER -> ../../dm-0
lrwxrwxrwx 1 0 0 10 Jul 7 03:50 dm-uuid-LVM-9aFKuMey6ADpOxY8TRtVpjFfXxwlizuPNgsiGfc9IwPmOwm15J8QrchVwPzEUbLe -> ../../dm-1
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 lvm-pv-uuid-LHHqtf-iZtb-1LCE-poY5-39yz-qzws-S3ztY6 -> ../../md1
lrwxrwxrwx 1 0 0 11 Jul 7 03:50 md-name-city17:128TBR6 -> ../../md127
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 md-name-rescue:0 -> ../../md0
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 md-name-rescue:1 -> ../../md1
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 md-uuid-4693467c:62fec27c:85a32b26:bd907402 -> ../../md0
lrwxrwxrwx 1 0 0 11 Jul 7 03:50 md-uuid-dc8356d5:81ede174:648f3c7f:8157ab85 -> ../../md127
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 md-uuid-fd7fc965:ca629897:f7fbd288:ba4c4cbb -> ../../md1
lrwxrwxrwx 1 0 0 13 Jul 7 03:50 nvme-eui.36344630525055630025384500000001 -> ../../nvme1n1
lrwxrwxrwx 1 0 0 15 Jul 7 03:50 nvme-eui.36344630525055630025384500000001-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 0 0 15 Jul 7 03:50 nvme-eui.36344630525055630025384500000001-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 0 0 13 Jul 7 03:50 nvme-eui.36344630525055650025384500000001 -> ../../nvme0n1
lrwxrwxrwx 1 0 0 15 Jul 7 03:50 nvme-eui.36344630525055650025384500000001-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 0 0 15 Jul 7 03:50 nvme-eui.36344630525055650025384500000001-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 0 0 13 Jul 7 03:50 nvme-SAMSUNG_MZQL2960HCJR-00A07_S64FNE0R505563 -> ../../nvme1n1
lrwxrwxrwx 1 0 0 15 Jul 7 03:50 nvme-SAMSUNG_MZQL2960HCJR-00A07_S64FNE0R505563-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 0 0 15 Jul 7 03:50 nvme-SAMSUNG_MZQL2960HCJR-00A07_S64FNE0R505563-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 0 0 13 Jul 7 03:50 nvme-SAMSUNG_MZQL2960HCJR-00A07_S64FNE0R505565 -> ../../nvme0n1
lrwxrwxrwx 1 0 0 15 Jul 7 03:50 nvme-SAMSUNG_MZQL2960HCJR-00A07_S64FNE0R505565-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 0 0 15 Jul 7 03:50 nvme-SAMSUNG_MZQL2960HCJR-00A07_S64FNE0R505565-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 wwn-0x5000c500dc26f7b9 -> ../../sdd
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 wwn-0x5000c500dc26f878 -> ../../sdi
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 wwn-0x5000c500dc27013d -> ../../sde
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 wwn-0x5000c500dc2708ac -> ../../sdc
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 wwn-0x5000c500dc2783e8 -> ../../sdh
lrwxrwxrwx 1 0 0  9 Jul 7 22:44 wwn-0x5000c500dc278490 -> ../../sdj
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 wwn-0x5000c500dc278a7c -> ../../sdb
lrwxrwxrwx 1 0 0  9 Jul 7 03:50 wwn-0x5000c500dc278b23 -> ../../sda
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 wwn-0x5000c500dc27989a -> ../../sdf
lrwxrwxrwx 1 0 0  9 Jul 7 04:34 wwn-0x5000c500dc27a271 -> ../../sdg

Lastly, here is the 'blkid' output:
/dev/nvme0n1p1: UUID="4693467c-62fe-c27c-85a3-2b26bd907402" UUID_SUB="b7294d12-8d83-a358-c469-c10e4514423c" LABEL="rescue:0" TYPE="linux_raid_member" PARTUUID="679908e3-01"
/dev/nvme0n1p2: UUID="fd7fc965-ca62-9897-f7fb-d288ba4c4cbb" UUID_SUB="72e5ba9c-633c-3172-2eee-4f4bb4dc0692" LABEL="rescue:1" TYPE="linux_raid_member" PARTUUID="679908e3-02"
/dev/nvme1n1p1: UUID="4693467c-62fe-c27c-85a3-2b26bd907402" UUID_SUB="460c24df-42a1-f131-ba12-6439f8583189" LABEL="rescue:0" TYPE="linux_raid_member" PARTUUID="4cd07aa2-01"
/dev/nvme1n1p2: UUID="fd7fc965-ca62-9897-f7fb-d288ba4c4cbb" UUID_SUB="f30e72e4-cc01-89ce-2980-d30b0a481322" LABEL="rescue:1" TYPE="linux_raid_member" PARTUUID="4cd07aa2-02"
/dev/md0: UUID="60f46c67-a299-4c26-b247-533a08f5a6ef" BLOCK_SIZE="1024" TYPE="ext3"
/dev/md1: UUID="LHHqtf-iZtb-1LCE-poY5-39yz-qzws-S3ztY6" TYPE="LVM2_member"
/dev/sda: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="7f1a41bf-673d-d1bd-8211-c086bf79eb44" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/sdb: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="c7310212-7c79-ecf3-3a07-cb1c04abcb5e" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/sdc: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="b3604e36-6e70-0cc5-bc77-c318ba7b1e62" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/md127: LABEL="volcity17" UUID="a6ea8405-dc47-4db9-be90-4b8373bad82c" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="6443204d-234b-c158-7a74-6acfeba4a476" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/sde: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="c374e7f0-a923-c962-1225-a1edccbf29ee" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/sdf: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="04a59cca-eb61-864f-9e27-a39fe7815f23" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/sdg: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="8f6f0668-1767-8aa6-2d2f-d03ff3ab1e22" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/sdh: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="30713f85-fdb9-f3b1-e457-94fb3147d8ca" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/sdi: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="8598ffc7-adfa-e505-83fb-009d28c6b9fd" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/sdj: UUID="dc8356d5-81ed-e174-648f-3c7f8157ab85" UUID_SUB="89b14c9a-8842-06b6-81e2-d449098238ed" LABEL="city17:128TBR6" TYPE="linux_raid_member"
/dev/mapper/vg0-root: UUID="743c26e6-93de-40d9-88b3-52c24342b410" BLOCK_SIZE="4096" TYPE="ext3"
/dev/mapper/vg0-swap: UUID="5bb0efda-0c3c-4c70-af80-8f65cff99ee9" TYPE="swap"

Should I also pass through /dev/md0, /dev/md1, and the dm devices to my OMV VM in order to bring the filesystem on the VM fully online for failed-RAID-drive recovery?

Sorry if this is a loaded question or may not even be supported on Proxmox, but I figured I would ask in case someone here has a similar setup or situation.

Thanks!
 
I answered my own question by passing through the assembled RAID array:
/sbin/qm set 102 -virtio11 /dev/disk/by-id/md-uuid-dc8356d5:81ede174:648f3c7f:8157ab85;
This gets the RAID filesystem to show as online even when one of the disks is missing (i.e. in the case of a hard drive failure).
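Since the array is now assembled on the PVE host as /dev/md127, I believe an actual drive replacement would be handled on the host with mdadm. A rough sketch only; the wwn used below and the <new-disk-wwn> placeholder are examples, substitute the actual failed and replacement disks:

mdadm --detail /dev/md127                                                   # check array state and which member dropped out
cat /proc/mdstat                                                            # quick overview of all md arrays on the host
mdadm --manage /dev/md127 --fail /dev/disk/by-id/wwn-0x5000c500dc278b23     # example: mark the dead member failed (if mdadm has not already)
mdadm --manage /dev/md127 --remove /dev/disk/by-id/wwn-0x5000c500dc278b23   # example: remove it from the array
mdadm --manage /dev/md127 --add /dev/disk/by-id/<new-disk-wwn>              # example: add the replacement disk; the rebuild then runs on the host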


ALSO, in order to see the RAID volume under the "RAID MGMT" tab in OMV, you must pass through the disks using the 'scsi' passthrough command:
example: /sbin/qm set 102 -scsi1 /dev/disk/by-id/wwn-0x5000c500dc278b23;
NOT: /sbin/qm set 102 -virtio1 /dev/disk/by-id/wwn-0x5000c500dc278b23;
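For anyone redoing the per-disk passthrough from scratch, a rough sketch that attaches every data disk as a scsi device. This assumes the 10 RAID members are the only ST16000NM001J drives in the host and that scsi1 through scsi10 are free on the VM; adjust to your setup:

# example only: attach each RAID member disk to VM 102 as scsi1..scsi10, using stable by-id paths
i=1
for id in /dev/disk/by-id/ata-ST16000NM001J-2TW113_*; do
    /sbin/qm set 102 -scsi$i "$id"
    i=$((i+1))
done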
 
