Working through Proxmox Multipath

evilxyzzy · New Member · Sep 30, 2024
Greetings!

I've been working through setting up Proxmox in an evaluation environment.

The hardware is a pair of Dell servers connected to a pair of Dell ME5 arrays via SAS, with each server having two SAS connections to each ME5 array.
Multipath is up, and my /dev/mapper entries show up as expected. Also as expected (after reading prior posts on the subject), the /dev/dm-* and /dev/mapper entries do not show up in the Proxmox GUI, only the individual path devices (/dev/sdb, /dev/sdc, etc.). The multipath config is basically straight from Dell's ME5 docs.
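For anyone following along, a minimal /etc/multipath.conf in the shape the ME5 docs describe might look like the sketch below. The vendor/product strings and attribute values here are assumptions from memory, not a verified configuration; check them against Dell's ME5 host setup guide for your firmware level before using them:

Code:
# /etc/multipath.conf -- illustrative sketch only; confirm all values
# against Dell's ME5 documentation
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
devices {
    device {
        vendor               "DellEMC"
        product              "ME5"
        path_grouping_policy "group_by_prio"
        prio                 "alua"
        path_checker         "tur"
        hardware_handler     "1 alua"
        failback             immediate
    }
}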

From the CLI I can create an LVM physical volume on /dev/dm-5 successfully; as expected, I cannot do the same on any of the other path devices, since they come back as "multipath component". But even after doing a pvcreate on /dev/dm-5, I still don't have anything usable showing up in the Proxmox GUI.
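(For reference, a quick way to confirm which /dev/dm-N node corresponds to which multipath map and which component paths belong to it; the device names are whatever your own layout produces:)

Code:
# list the stable /dev/mapper aliases and the dm-N nodes they point to
ls -l /dev/mapper
# show each multipath map with its WWID, dm-N name, and component paths
multipath -ll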

Is the answer that the GUI really doesn't speak multipath at all? It looks like others have made it work, but the docs on the subject seem light. I'm trying to work my way through this and get a feel for the product before we request an official eval and are put on a set timeline. The end goal is, of course, replacing our existing VMware cluster.

I'll admit that multipath on Linux is not a strength of mine. VMware has made this all very easy in the past, so I've never had to work with it at this level before; quite possibly I'm not asking the correct questions.

Thanks.

-Tom
 
If you have your DM paths working properly (check after a reboot as well), then you just need a few CLI commands to complete the setup.
Here is a full example:

Vendor-specific provisioning step (a Blockbridge CLI, in this example):

Code:
bb vss provision -c 61GiB --with-disk --disk-label mpdisk --label mpdisk
== Created vss: mpdisk (VSS186D194C4076A3A3)


== VSS: mpdisk (VSS186D194C4076A3A3)
label                 mpdisk
serial                VSS186D194C4076A3A3
uuid                  3d26fcd2-8063-45e3-a807-0228691ee474
created               2024-10-02 18:31:16 -0400
status                online
current time          2024-10-02T22:31+00:00


Code:
bb host attach -d mpdisk --multipath
===============================================================================================================
mpdisk/mpdisk attached (read-write) to pve-veeam (via 2 paths) as /dev/mapper/360a01050a682ff14196d194c40eeb1fa
===============================================================================================================

Multipath status:
Code:
multipath -ll
360a01050a682ff14196d194c40eeb1fa dm-6 B*BRIDGE,SECURE DRIVE
size=61G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 34:0:0:0 sdc 8:32 active ready running
  `- 35:0:0:0 sdd 8:48 active ready running
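Note that the path count depends on topology: this example shows two paths, while a host with two SAS connections into each ME5 controller would normally enumerate more paths per LUN. A quick way to see every path and its state as multipathd tracks it:

Code:
# one line per path, including its device node and checker state
multipathd show paths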

Create LVM Volume Group:
Code:
pvcreate /dev/dm-6
  Physical volume "/dev/dm-6" successfully created.
vgcreate mp-blockbridge /dev/dm-6
  Volume group "mp-blockbridge" successfully created
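One caveat worth adding (not part of the original example): /dev/dm-N numbering is not guaranteed to be stable across reboots, so referencing the persistent /dev/mapper alias is generally safer. The equivalent commands, using the WWID alias from the attach output above, would be:

Code:
# same operations via the stable WWID alias instead of the dm-6 node
pvcreate /dev/mapper/360a01050a682ff14196d194c40eeb1fa
vgcreate mp-blockbridge /dev/mapper/360a01050a682ff14196d194c40eeb1fa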

Add Proxmox storage pool:
Code:
pvesm add lvm blockbridge-lvm --vgname mp-blockbridge --content images
pvesm status
Name                   Type     Status           Total            Used       Available        %
blockbridge-lvm         lvm     active        63959040               0        63959040    0.00%
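For completeness, pvesm just writes a storage stanza; the resulting /etc/pve/storage.cfg entry should look roughly like the sketch below. If the volume group is visible from more than one node (as with shared SAS), the shared flag also applies:

Code:
# /etc/pve/storage.cfg (illustrative)
lvm: blockbridge-lvm
        vgname mp-blockbridge
        content images
        # for storage visible from every node, also set:
        # shared 1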



Blockbridge: Ultra low latency all-NVMe shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
