mdadm array - How to use as storage?

MGSteve

New Member
Nov 1, 2023
OK, OK, I know from reading copious threads that mdadm is not officially supported; however, I have a good reason for choosing it.

I've got a Dell R640 with 7 NVMe drives and a BOSS card with 2x M.2 SSDs for the OS. The R640 has no hardware RAID support for the NVMe drives and only RAID-1 support from the onboard software RAID under Linux.

I don't want RAID1, so access is via the AHCI driver.

If I benchmark one drive on its own under Ubuntu 22.04, I get reads of around 2 GB/s and writes of 3 GB/s. On a RAID 10 array of 6 drives plus 1 spare, I get 5 GB/s read and write.
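For what it's worth, sequential numbers like these are typically measured with something like fio (the device path and job parameters below are assumptions, not from the thread):

```shell
# Sequential read benchmark sketch with fio.
# /dev/nvme0n1 is an assumed device name -- check lsblk on your system.
fio --name=seqread --filename=/dev/nvme0n1 --rw=read \
    --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based

# Same for writes. WARNING: destructive to data on the target device.
fio --name=seqwrite --filename=/dev/nvme0n1 --rw=write \
    --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based
```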

Under a ZFS RAIDZ2 pool under Proxmox I get around 2 GB/s read and write. This is a massive performance hit that I do not want to take, hence the use of mdadm.

I've set up the mdadm array in the console on the Proxmox server, but how do I get Proxmox to show it as a storage location?
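For context, a 6-disk RAID 10 with one hot spare like the one described can be created along these lines (device names are assumptions):

```shell
# Create a RAID 10 array from six NVMe drives with one hot spare.
# Device names below are assumptions -- check lsblk on your system.
mdadm --create /dev/md0 --level=10 --raid-devices=6 --spare-devices=1 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
    /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1

# Watch the initial sync progress.
cat /proc/mdstat
```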

I'm not interested in ZFS, so please don't reply saying to use that instead. Once you add a VM, the benchmark speed using ssd-benchmark was 300 MB/s! Dire.

I know I can reinstall using Debian and install onto the RAID array, which I'd rather not do, as it makes the BOSS card useless and I like the idea of separating the OS from the VM data.

I have the array mounted in /mnt/vm_store, but can of course move it to a better location.
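One thing worth checking either way: the array and the mount need to survive reboots. A sketch, assuming the array is /dev/md0 and the filesystem is ext4 (both assumptions):

```shell
# Record the array so it is assembled at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# Make the mount persistent (device, mount point, and fs type assumed).
echo '/dev/md0 /mnt/vm_store ext4 defaults 0 2' >> /etc/fstab
mount -a
```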

Regards

Steve.
 
Once you have the md device, you have two choices:
a) Create an LVM based storage object for PVE
b) Create a Directory based storage object for PVE

https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)
https://pve.proxmox.com/wiki/Storage:_Directory

One is block storage, the other is file-based. There are many articles on the pros and cons of each. Just use /dev/mdX as the target.
If you decide to go for LVM, you will first need to get rid of the mount and the filesystem you placed on it.
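A minimal sketch of both options, assuming the array is /dev/md0 and using made-up storage and volume group names:

```shell
# Option a) LVM: hand the raw md device to LVM (after removing the
# existing filesystem/mount), then register it as PVE block storage.
pvcreate /dev/md0
vgcreate vg_vmstore /dev/md0
pvesm add lvm vm-lvm --vgname vg_vmstore --content images,rootdir

# Option b) Directory: keep the filesystem mounted at /mnt/vm_store
# and register the path as file-based storage. --is_mountpoint tells
# PVE not to write there unless the mount is actually present.
pvesm add dir vm-dir --path /mnt/vm_store \
    --content images,iso,backup --is_mountpoint yes
```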


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 