Working through Proxmox Multipath

evilxyzzy
New Member
Sep 30, 2024
Greetings!

I've been working through setting up Proxmox in an eval situation.

The hardware setup is a couple of Dell servers connected to a couple of Dell ME5s via SAS, with each server having a pair of SAS connections to each ME5 array.
Multipath is up and my /dev/mapper entries show up as expected. Also as expected (after reading prior posts on the subject), the /dev/dm-* and /dev/mapper entries do not show up in the Proxmox GUI, only the individual devices (/dev/sdb, /dev/sdc, etc.). The multipath config is basically straight from Dell's ME5 docs.

From the CLI I can create an LVM physical volume on /dev/dm-5 successfully; as expected, I cannot do the same on any of the others, as they come back as "multipath component". But even after doing a pvcreate on /dev/dm-5, I still don't have anything usable showing up in the Proxmox GUI.

Is the answer that the GUI really doesn't speak multipath at all? It looks like others have made it work, but the docs on the subject seem light. I'm trying to work my way through this and get a feel for the product before we request an official eval and are working against a set timeline. The end goal is, of course, replacing our existing VMware cluster.

I will admit that multipath on Linux is not a strength of mine. VMware has made this all very easy in the past, so I have never had to work with it at this level before, and quite possibly I am not asking the correct questions.

Thanks.

-Tom
 
If you have your DM paths working properly (check after a reboot as well), then you just need a few CLI commands to complete the setup.
Here is a full example:

Vendor-specific provisioning step:

Code:
bb vss provision -c 61GiB --with-disk --disk-label mpdisk --label mpdisk
== Created vss: mpdisk (VSS186D194C4076A3A3)


== VSS: mpdisk (VSS186D194C4076A3A3)
label                 mpdisk
serial                VSS186D194C4076A3A3
uuid                  3d26fcd2-8063-45e3-a807-0228691ee474
created               2024-10-02 18:31:16 -0400
status                online
current time          2024-10-02T22:31+00:00


 bb host attach -d mpdisk --multipath
===============================================================================================================
mpdisk/mpdisk attached (read-write) to pve-veeam (via 2 paths) as /dev/mapper/360a01050a682ff14196d194c40eeb1fa
===============================================================================================================

Multipath status:
Code:
multipath -ll
360a01050a682ff14196d194c40eeb1fa dm-6 B*BRIDGE,SECURE DRIVE
size=61G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 34:0:0:0 sdc 8:32 active ready running
  `- 35:0:0:0 sdd 8:48 active ready running

Create LVM Volume Group:
Code:
pvcreate /dev/dm-6
  Physical volume "/dev/dm-6" successfully created.
vgcreate mp-blockbridge /dev/dm-6
  Volume group "mp-blockbridge" successfully created

Add Proxmox storage pool:
Code:
pvesm add lvm blockbridge-lvm -vgname mp-blockbridge -content images
pvesm status
Name                   Type     Status           Total            Used       Available        %
blockbridge-lvm         lvm     active        63959040               0        63959040    0.00%
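For a Dell ME5 presented over SAS, the first two (vendor-specific) steps above aren't needed since the LUN is already visible to the host; only the last three steps apply. A minimal sketch for that case, using the stable /dev/mapper alias instead of the dm-N name (the WWID, volume group name, and storage name below are placeholders, not taken from this thread):

Code:
# the /dev/mapper/<wwid> alias is stable across reboots, unlike /dev/dm-N
pvcreate /dev/mapper/<wwid-of-your-ME5-volume>
vgcreate me5-vg /dev/mapper/<wwid-of-your-ME5-volume>
# add --shared 1 if more than one node sees the same LUN
pvesm add lvm me5-lvm --vgname me5-vg --content images
pvesm status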



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks for the detailed reply. This worked perfectly. Next up is attempting to create an ISO repository and then using that to install an OS on the new disks. Still reading docs on this.

Thanks again!
 
The hardware setup is a couple of Dell servers connected to a couple of Dell ME5s via SAS, with each server having a pair of SAS connections to each ME5 array.
It's not quite clear yet whether you have one server per ME5 (repeated a few times), or e.g. up to four servers with four ME5s, where each of the four servers is connected to every ME5 so that every defined volume of any RAID set on any ME5 could (if wanted and mapped) be seen on any of your servers :)
As it's an eval situation, I assume these Dell- (and HP- and ...) labeled Seagate systems haven't been purchased yet. The firmware's options for RAID-set scrub scheduling are poor and should be turned off; scrubbing is better driven from a host cron job, with all its flexibility, via batched SSH commands (as with all the other internal CLI commands, if wanted) :) These are just some inputs for your brainstorming ...
 
"I am facing a similar issue with duplicate disks having the same WWID while using SAN storage. How should I configure multipath.conf? I have tried several settings, including using mpath_member, but nothing has worked so far
 
How should I configure multipath.conf?
The presence of "mpath member" indicates that multipath is possibly working. What is the output of:
multipath -ll

The presence of "duplicate" devices is expected.
I have tried several settings, including using mpath_member, but nothing has worked so far
What exactly are you referring to here? Have you already reviewed reply #2 above, specifically the last three steps?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I'm starting from scratch, and I've reset everything to its initial state.

There is no multipath.conf file; I haven't created one yet. On the GUI side, I see the disks with the same WWID.

When I run multipath -ll, nothing shows up, as there is no configuration file.

The output of multipath -ll -v3 is below. What I want to achieve is to not see the duplicate disks that have the same WWID.

How should I configure the multipath.conf? I'm seeing the same disk with different WWID numbers 4 times, but it's actually a single disk.



Code:
root@PVE01:~# multipath -ll -v3
977.820690 | set open fds limit to 1048576/1048576
977.820738 | loading /lib/multipath/libchecktur.so checker
977.820814 | checker tur: message table size = 3
977.820821 | loading /lib/multipath/libprioconst.so prioritizer
977.820889 | _init_foreign: foreign library "nvme" is not enabled
977.824431 | sda: size = 1874329600
977.824488 | sda: vendor = BROADCOM
977.824501 | sda: product = MR9560-16i
977.824512 | sda: rev = 5.22
977.825020 | sda: h:b:t:l = 0:3:111:0
977.825115 | sda: tgt_node_name =
977.825121 | sda: uid_attribute = ID_SERIAL (setting: multipath internal)
977.825123 | sda: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
977.825254 | sda: 51135 cyl, 255 heads, 63 sectors/track, start at 0
977.825260 | sda: vpd_vendor_id = 0 "undef" (setting: multipath internal)
977.825271 | sda: serial = 00c7765a57ab114a2dc0333115b26200
977.825276 | sda: detect_checker = yes (setting: multipath internal)
977.825394 | sda: path_checker = tur (setting: multipath internal)
977.825399 | sda: checker timeout = 90 s (setting: kernel sysfs)
977.825444 | sda: tur state = up
977.825587 | sdb: size = 8589934592
977.825644 | sdb: vendor = HUAWEI
977.825656 | sdb: product = S6800T
977.825666 | sdb: rev = 6000
977.826159 | sdb: h:b:t:l = 22:0:2:1
977.826326 | sdb: tgt_node_name = 0x2100c469f05ec050
977.826331 | sdb: uid_attribute = ID_SERIAL (setting: multipath internal)
977.826332 | sdb: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
977.826695 | sdb: 0 cyl, 64 heads, 32 sectors/track, start at 0
977.826701 | sdb: vpd_vendor_id = 0 "undef" (setting: multipath internal)
977.826714 | sdb: serial = 2102353SYSFSNA1000070026
977.826718 | sdb: detect_checker = yes (setting: multipath internal)
977.826857 | loading /lib/multipath/libcheckrdac.so checker
977.826941 | checker rdac: message table size = 9
977.826945 | sdb: path_checker = rdac (setting: storage device autodetected)
977.826961 | sdb: checker timeout = 30 s (setting: kernel sysfs)
977.827071 | rdac checker failed to set TAS bit
977.827124 | sdb: rdac state = up
977.827183 | sdc: size = 8589934592
977.827238 | sdc: vendor = HUAWEI
977.827250 | sdc: product = S6800T
977.827260 | sdc: rev = 6000
977.827814 | sdc: h:b:t:l = 22:0:3:1
977.827970 | sdc: tgt_node_name = 0x2100c469f05ec050
977.827974 | sdc: uid_attribute = ID_SERIAL (setting: multipath internal)
977.827976 | sdc: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
977.828200 | sdc: 0 cyl, 64 heads, 32 sectors/track, start at 0
977.828206 | sdc: vpd_vendor_id = 0 "undef" (setting: multipath internal)
977.828220 | sdc: serial = 2102353SYSFSNA1000070026
977.828225 | sdc: detect_checker = yes (setting: multipath internal)
977.828371 | sdc: path_checker = rdac (setting: storage device autodetected)
977.828391 | sdc: checker timeout = 30 s (setting: kernel sysfs)
977.828490 | rdac checker failed to set TAS bit
977.828544 | sdc: rdac state = up
977.828599 | sdd: size = 4294967296
977.828654 | sdd: vendor = HUAWEI
977.828666 | sdd: product = S6800T
977.828677 | sdd: rev = 6000
977.829241 | sdd: h:b:t:l = 22:0:3:2
977.829379 | sdd: tgt_node_name = 0x2100c469f05ec050
977.829384 | sdd: uid_attribute = ID_SERIAL (setting: multipath internal)
977.829385 | sdd: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
977.829694 | sdd: 0 cyl, 64 heads, 32 sectors/track, start at 0
977.829700 | sdd: vpd_vendor_id = 0 "undef" (setting: multipath internal)
977.829714 | sdd: serial = 2102353SYSFSNA1000070042
977.829718 | sdd: detect_checker = yes (setting: multipath internal)
977.829837 | sdd: path_checker = rdac (setting: storage device autodetected)
977.829857 | sdd: checker timeout = 30 s (setting: kernel sysfs)
977.829976 | rdac checker failed to set TAS bit
977.830030 | sdd: rdac state = up
977.830089 | sde: size = 8589934592
977.830144 | sde: vendor = HUAWEI
977.830155 | sde: product = S6800T
977.830166 | sde: rev = 6000
977.830651 | sde: h:b:t:l = 24:0:1:1
977.830803 | sde: tgt_node_name = 0x2100c469f05ec050
977.830807 | sde: uid_attribute = ID_SERIAL (setting: multipath internal)
977.830809 | sde: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
977.831013 | sde: 0 cyl, 64 heads, 32 sectors/track, start at 0
977.831021 | sde: vpd_vendor_id = 0 "undef" (setting: multipath internal)
977.831036 | sde: serial = 2102353SYSFSNA1000070026
977.831040 | sde: detect_checker = yes (setting: multipath internal)
977.831145 | sde: path_checker = rdac (setting: storage device autodetected)
977.831166 | sde: checker timeout = 30 s (setting: kernel sysfs)
977.831288 | rdac checker failed to set TAS bit
977.831360 | sde: rdac state = up
977.831416 | sdf: size = 8589934592
977.831470 | sdf: vendor = HUAWEI
977.831482 | sdf: product = S6800T
977.831493 | sdf: rev = 6000
977.831957 | sdf: h:b:t:l = 24:0:3:1
977.832162 | sdf: tgt_node_name = 0x2100c469f05ec050
977.832166 | sdf: uid_attribute = ID_SERIAL (setting: multipath internal)
977.832168 | sdf: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
977.832406 | sdf: 0 cyl, 64 heads, 32 sectors/track, start at 0
977.832412 | sdf: vpd_vendor_id = 0 "undef" (setting: multipath internal)
977.832427 | sdf: serial = 2102353SYSFSNA1000070026
977.832431 | sdf: detect_checker = yes (setting: multipath internal)
977.832548 | sdf: path_checker = rdac (setting: storage device autodetected)
977.832569 | sdf: checker timeout = 30 s (setting: kernel sysfs)
977.832668 | rdac checker failed to set TAS bit
977.832723 | sdf: rdac state = up
977.832778 | sdg: size = 4294967296
977.832834 | sdg: vendor = HUAWEI
977.832845 | sdg: product = S6800T
977.832857 | sdg: rev = 6000
977.833441 | sdg: h:b:t:l = 24:0:3:2
977.833585 | sdg: tgt_node_name = 0x2100c469f05ec050
977.833590 | sdg: uid_attribute = ID_SERIAL (setting: multipath internal)
977.833591 | sdg: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
977.833762 | sdg: 0 cyl, 64 heads, 32 sectors/track, start at 0
977.833767 | sdg: vpd_vendor_id = 0 "undef" (setting: multipath internal)
977.833782 | sdg: serial = 2102353SYSFSNA1000070042
977.833785 | sdg: detect_checker = yes (setting: multipath internal)
977.833892 | sdg: path_checker = rdac (setting: storage device autodetected)
977.833913 | sdg: checker timeout = 30 s (setting: kernel sysfs)
977.834020 | rdac checker failed to set TAS bit
977.834074 | sdg: rdac state = up
977.834115 | loop0: device node name blacklisted
977.834149 | loop1: device node name blacklisted
977.834180 | loop2: device node name blacklisted
977.834210 | loop3: device node name blacklisted
977.834239 | loop4: device node name blacklisted
977.834268 | loop5: device node name blacklisted
977.834298 | loop6: device node name blacklisted
977.834328 | loop7: device node name blacklisted
977.834360 | dm-0: device node name blacklisted
977.834391 | dm-1: device node name blacklisted
977.834422 | dm-2: device node name blacklisted
977.834453 | dm-3: device node name blacklisted
977.834483 | dm-4: device node name blacklisted
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
0:3:111:0 sda 8:0 -1 undef undef BROADCOM,MR9560-16i unknown
22:0:2:1 sdb 8:16 -1 undef undef HUAWEI,S6800T unknown
22:0:3:1 sdc 8:32 -1 undef undef HUAWEI,S6800T unknown
22:0:3:2 sdd 8:48 -1 undef undef HUAWEI,S6800T unknown
24:0:1:1 sde 8:64 -1 undef undef HUAWEI,S6800T unknown
24:0:3:1 sdf 8:80 -1 undef undef HUAWEI,S6800T unknown
24:0:3:2 sdg 8:96 -1 undef undef HUAWEI,S6800T unknown
977.836897 | multipath-tools v0.9.4 (12/19, 2022)
977.836909 | libdevmapper version 1.02.185
977.837011 | kernel device mapper v4.48.0
977.837026 | DM multipath kernel driver v1.14.0
977.837385 | unloading rdac checker
977.837411 | unloading tur checker
977.837426 | unloading const prioritizer
root@PVE01:~#
 
How should I configure the multipath.conf? I'm seeing the same disk with different WWID numbers 4 times, but it's actually a single disk.
The best approach is to find your SAN vendor's specific recommendations.

The next best approach is to review PVE Wiki: https://pve.proxmox.com/wiki/ISCSI_Multipath
Even though it's iSCSI, the multipath configuration is somewhat standard.
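As a generic starting point only (this is a sketch, not Huawei's official recommendation; the vendor guide and the wiki page above take precedence), a minimal /etc/multipath.conf often looks something like this:

Code:
defaults {
    find_multipaths      "yes"            # only assemble maps for devices with multiple paths / known WWIDs
    user_friendly_names  "no"             # keep WWID-based /dev/mapper names
    polling_interval     2
    path_selector        "round-robin 0"
    path_grouping_policy multibus
    uid_attribute        "ID_SERIAL"
    failback             immediate
    no_path_retry        queue
}

blacklist {
    devnode "^sda$"                       # local Broadcom RAID disk in the log above; blacklisting by wwid is more robust
}

After writing the file, add each SAN WWID with multipath -a <wwid> (or rely on find_multipaths), restart the daemon with systemctl restart multipathd, and re-check with multipath -ll.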


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
ISO images require file-based storage. You have many options. If you plan to share this location across multiple nodes, NFS is your primary and most logical choice.
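For example, assuming an NFS export already exists (the server address and export path below are placeholders), adding it as shared ISO storage is a one-liner:

Code:
pvesm add nfs nfs-iso --server 192.0.2.10 --export /export/iso --content iso
pvesm status

Uploaded ISOs then land under /mnt/pve/nfs-iso/template/iso/ and are visible on every node that has the storage enabled.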

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

Yupp. I was able to set up a single-node-only ISO store which is good enough for this eval.

I have not yet been able to grow a disk successfully. I'll start a new thread on that topic once I have exhausted the docs, as it's wandering away from being multipath-related.

Thanks again.
 
What I want to achieve is to not see the duplicate disks that have the same WWID.

In my limited Proxmox experience, this is totally possible. I wasn't able to find a clear answer on how to properly set up iSCSI MPIO similar to how I'm used to it working in VMware ESXi. After a number of hours, I was able to piece together the right settings. To help others, I created a quick guide (attached) on how to add iSCSI devices and set up MPIO properly so you only see one storage device per WWID.
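For anyone who can't grab the attachment, the usual open-iscsi side of it looks roughly like this (the portal IPs are placeholders; the multipath.conf part is covered by the attached guide and the wiki page linked above):

Code:
# discover and log in to each portal (repeat per controller/portal)
iscsiadm -m discovery -t sendtargets -p 192.0.2.21
iscsiadm -m discovery -t sendtargets -p 192.0.2.22
iscsiadm -m node --login

# make the sessions persistent across reboots
iscsiadm -m node -o update -n node.startup -v automatic

# with multipath configured, one device per WWID should show up
multipath -ll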
 

Attachments

  • Proxmox-Configuring iSCSI Multipath.pdf
    337.6 KB
