Multipath with DELL MD3400 SAS

eglyn

Member
Aug 26, 2021
Hi everybody,

I'm trying to use my DELL MD3400 as storage for my Proxmox cluster, but I have some issues with multipath...

my multipath.conf:

devices {
    device {
        vendor "DELL"
        product "MD34xx"
        path_grouping_policy group_by_prio
        prio rdac
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
    }
}

multipaths {
    multipath {
        wwid 3600a09800064d255000005cf5ff48f16
        alias lun1
    }
}

When I run multipath -ll after restarting the service, there is no output :/

I have a warning in the multipathd service:
systemd[1]: Starting Device-Mapper Multipath Device Controller...
multipathd[154803]: --------start up--------
multipathd[154803]: read /etc/multipath.conf
multipathd[154803]: failed to increase buffer size
multipathd[154803]: path checkers start up
systemd[1]: Started Device-Mapper Multipath Device Controller.


Am I doing something wrong?
 
You didn't provide enough information to make a judgement as to whether anything is wrong.

Multipath does not list anything, but are there devices to list in the first place? What does iscsiadm -m session show?
lsblk, lsscsi, etc.

The first step is to ensure that the devices are seen by the OS, then configure multipath.
For the multipath configuration I highly recommend referring to Dell's documentation for your device/OS.
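Something along these lines can confirm the OS sees the LUNs at all (exact tools depend on what you have installed; since the MD3400 is SAS-attached there is no iSCSI session to check, the LUNs should simply appear once the SAS HBA sees the array):

lsscsi                        # each LUN should appear once per SAS path, vendor DELL / product MD34xx
lsblk                         # the same LUNs show up as plain sd* block devices
multipath -v3 2>&1 | less     # verbose dry run: shows why each path is accepted or skipped
multipathd show paths         # once the daemon is running, list the paths it monitors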

The "failed to increase buffer size" line is most likely a benign error for what you are trying to do right now.


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I found a solution: I added the WWID of the disk to /etc/multipath/wwids and now the disk is OK.

I did the same on the other server, and it's OK.
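Roughly the equivalent steps, for anyone who hits the same thing (/dev/sdX is just a placeholder for one of the path devices of the LUN):

multipath -a /dev/sdX        # records the device's WWID in /etc/multipath/wwids, same effect as adding the entry by hand
systemctl restart multipathd
multipath -ll                # the "lun1" alias should now be listed with its paths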

But how do I use the same storage on both servers?

On the first one I created an LVM, added it as a Proxmox storage, and marked it as shared; it is then available on the other server.
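Roughly what I mean (the volume group name vg_md3400 and the storage ID md3400 are just example names):

pvcreate /dev/mapper/lun1
vgcreate vg_md3400 /dev/mapper/lun1
pvesm add lvm md3400 --vgname vg_md3400 --shared 1    # registers the LVM storage in Proxmox and marks it shared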

But if the first server goes down, the second cannot access the LVM storage :/

So how can I use the storage on both servers independently, like on vSphere?
 
I think I had the same problem you have. The LVM was visible on the other hosts but not usable; rebooting the other nodes fixed that.
And as bbgeek17 pointed out to me, using iSCSI+LVM on a Proxmox cluster is not like using a shared filesystem such as VMFS (ESXi, vSphere) or NFS.
It's more of a handover of a logical volume when migrating to other hosts. It works, just not like the shared storage you are used to on vSphere.
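For illustration, this is roughly what that handover looks like (VM ID 101 and the node name node2 are just examples):

lvs -o +lv_active                # shows which logical volumes are currently active on this node
qm migrate 101 node2 --online    # the VM's LV is deactivated here and activated on node2 as part of the migration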
 
Using iSCSI+LVM on a Proxmox cluster is not like using a shared filesystem such as VMFS (ESXi, vSphere) or NFS.
It's more of a handover of a logical volume when migrating to other hosts. It works, just not like the shared storage you are used to on vSphere.
LVM can be used as shared block storage, but you need a shared filesystem on top to get something like VMFS. For Linux there is, e.g., OCFS2 for exactly that purpose.
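For the plain shared-LVM case, the resulting entry in /etc/pve/storage.cfg would look roughly like this (storage ID and VG name are just examples):

lvm: md3400
        vgname vg_md3400
        content images,rootdir
        shared 1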

On the first one I created an LVM, added it as a Proxmox storage, and marked it as shared; it is then available on the other server.
If the servers are not in a cluster, you will need to manually keep track of everything. This is not a good configuration and will eventually lead to data loss. You can try to build a cluster with the two machines and a third quorum/vote device; then the LVM metadata is always refreshed on each LVM metadata change.
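A rough sketch of that setup (the cluster name and the addresses are placeholders; the third machine only needs the corosync-qnetd package, not a full Proxmox install):

# on the first node
pvecm create md-cluster
# on the second node
pvecm add <ip-of-first-node>
# register the external vote daemon running on the third machine
pvecm qdevice setup <ip-of-qdevice-host>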
 
