[SOLVED] Help with multipath with 1 x DS4246 and 2 x PVE nodes? Is it even a thing?

iouej

Member
Aug 28, 2018
I am building a Proxmox cluster and I want to get the most HA from my hardware.
For the Proxmox cluster I have 3 x R720, and two of them have an LSI 9200-8e connected for the shared storage to a DS4246 with 2 IOM modules. I have 4 cables connecting the two R720s to the DS4246, as in the following schema:
[attached diagram: SAS cabling from the two R720s to the two IOM modules of the DS4246]
I would like to create a ZFS pool out of the DS4246 taking advantage of this multipath / HA connectivity in Proxmox.
In the end I would like to see one ZFS pool in my Proxmox cluster with one active connection and three failover connections; better still, two active connections (one per node) and two failover connections (one per node); or, best of all in a crazy geek world, four active connections for maximum bandwidth that could degrade to 3-2-1 connections if something fails, in some kind of LACP for storage (I don't know if that last wish makes sense).

At the moment, I can't find the keywords to RTFM and set this idea up. What am I looking for? And if you have any advice before I start reading, I'd be happy to hear it!
Thank you !

Edit 1 & 2 : typos
 
What you are looking for is "DS4246 multipath linux". The 3rd and 4th links appear to be what you need to get started, e.g. https://docs.netapp.com/ontap-9/top...UID-A46327AC-5A5E-459F-87E0-B25B0D83F52B.html

Few things to keep in mind:
- The disks in your storage are probably already RAID'ed, likely R5 or R6. The common advice is that you should never run ZFS on top of RAID.
- The RAID in your array is probably carved into LUNs. Each LUN "belongs" to / is serviced by only one controller at a time; it is most likely not accessible for active read/write from both modules. That means you will get an Active/Passive connection from each server to each LUN.
- If you create a few LUNs and spread them across the modules, you might be able to take advantage of both paths (one per LUN) at a time.
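To make that concrete, here is a minimal /etc/multipath.conf sketch for dm-multipath (a starting point only; the WWID and alias below are placeholders, not values from a real DS4246):

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}

multipaths {
    multipath {
        # Placeholder WWID: replace with the real ID reported by
        # /lib/udev/scsi_id or "multipath -v3"
        wwid  35000c500a1b2c3d4
        alias shelf-disk-01
    }
}
```

With this in place, `multipath -ll` should list each disk once, with both paths grouped underneath a single /dev/mapper device.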


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for your answer! I'm not sure I understand your suggestion: the solution you linked seems to be for a SAN, whereas my configuration is DAS, the DS4246 being just a JBOD.
Is that a silly remark, given that I have only a little knowledge of this?
 
The article may be iSCSI-centric, but multipath does not care what the transport protocol is. As long as it sees the same disk ID via multiple paths/buses/etc., it will be able to create a single representation of that disk for the application to consume.
I suspect that if you search the NetApp documentation hard enough you will find a more DAS-specific article. But even this one should give you a starting point for your configuration.
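Under that assumption, a quick sanity check is to compare SCSI WWIDs across paths (the /dev/sdX names below are examples; yours will differ):

```shell
# List SCSI devices with their host/channel/target addresses
lsscsi

# Print each path's WWID: identical output means the two device
# nodes are the same physical disk reached over different paths
for dev in /dev/sdb /dev/sdc; do
    printf '%s: ' "$dev"
    /lib/udev/scsi_id --whitelisted --device="$dev"
done
```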


 
We use NetApp at work, and my understanding is that the multipath connection from the disk shelf to the controller is just for failover; certainly, in the admin console the drives are managed by only one of the controllers at a time. I think the software runs a heartbeat monitor, and a takeover is initiated if the master controller goes down. I know it works this way during firmware updates.

I'm not a NetApp expert, so there may be other ways to set these systems up, but this is my experience. There also seems to be very little technical documentation available to non-partners, and I've seen little on home-brew setups apart from using them as JBOD disk shelves.

I think the best you can hope for is that two or more systems can simultaneously see all of the drives (I've only ever connected a NetApp disk shelf to a single, non-NetApp host, but I would assume that is what happens). You could then have host1 managing drives 1-12 while host2 manages drives 13-24. If you needed to fail over for maintenance, you could export the pool from the current host and import it on the other host.
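That export/import failover could be sketched like this (`tank` is a placeholder pool name; both hosts are assumed to be cabled to the shelf):

```shell
# On host1, the current owner: release the pool cleanly
zpool export tank

# On host2: scan the shelf devices and take the pool over
zpool import -d /dev/mapper tank
```

One caution: never import the same pool on two hosts at once. ZFS is not a cluster filesystem, and a forced double import (`zpool import -f`) will corrupt the pool.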

I'd be interested to know how you get on; let us know if you discover anything interesting along the way.
 
Yes, as I mentioned in my first reply: only one controller is active for a given LUN. However, if you have two LUNs, you should be able to assign them to different controllers, with the partner being the standby for each particular LUN.

The terminology you are looking for is ALUA: https://library.netapp.com/ecmdocs/ECMP1196995/html/GUID-D22C5B1F-060E-4060-A62A-5565AFF2D247.html

As I mentioned, the multipathing is not active/active but rather active/standby. In most cases the controller initiates the failover, and vendor-proprietary drivers are needed for the host to do it.

Linux multipath (dm-multipath) knows how to handle ALUA devices. The idea is to mask the device path/name changes so the application is only aware of /dev/dm-X.
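For reference, that ALUA-aware behaviour is selected in /etc/multipath.conf; a sketch, with the vendor/product strings being assumptions to verify against your own `multipath -ll` output:

```
devices {
    device {
        vendor               "NETAPP"
        product              ".*"
        # Group paths by ALUA priority: optimized paths form the
        # active group, non-optimized paths the standby group
        path_grouping_policy "group_by_prio"
        prio                 "alua"
        hardware_handler     "1 alua"
        failback             "immediate"
    }
}
```

With `group_by_prio` plus the `alua` prioritizer, I/O follows whichever controller currently owns the LUN and moves automatically after a takeover.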


 
Thank you for all your input; I'm starting to wrap my head around this while reading the articles. I still need more RTFM before my first test, but I'm getting there!
Thank you for your help!

Edit: @bbgeek17, rereading your first post, you mentioned the possible presence of a RAID configuration on the disks, as if I had to delete that configuration. My understanding was that such a configuration lives on the RAID card or in the OS itself, but not in the JBOD. Do I need to clear a configuration anywhere other than the RAID cards (which are freshly flashed to IT mode) or the OS (which is freshly installed)?
 
If this is just a disk shelf and not a filer (from a quick glance, it appears to be just a shelf), then there is no storage/RAID management; you are correct.
I am used to dealing with complete systems, so my train of thought was about those. You should be OK if you see every disk directly once connectivity is established.
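A quick way to check that once the cables are in (assumes the `lsscsi` and `sg3_utils` packages are installed; the /dev/sg node is an example):

```shell
# Each shelf disk should appear once per path: with two cables into
# a 24-bay shelf, expect up to 48 disk entries from the shelf
lsscsi | grep -i netapp

# Query the shelf's SES enclosure processor (bays, fans, temperatures)
sg_ses /dev/sg24
```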


 
