Help with setting up iSCSI target with multiple IPs on Proxmox

ITxD · New Member · Jul 19, 2023
Hi,

We have installed a new HPE MSA 2060 storage array. The array has two controllers, A and B, and each controller has 4 NICs (8 NICs in total). Each NIC is configured with an IP, so the two controllers have 8 IPs in total.
We are trying to connect a Proxmox cluster to the storage, but we're not sure which of the storage's 8 IPs we should use.

Proxmox only lets us add one IP for the portal, and we can't use just one IP because we need redundancy.

I have attached a diagram detailing the setup.

Let me know if you have any questions.
 

Attachments

  • storage diagram.jpg
Btw, among the hardware options, iSCSI is the slowest and most complicated to configure, FC sits in between, and SAS is the fastest and easiest, but SAS only supports 4 redundant hosts.
Normally this is solved with a multipath (mpath) configuration on the storage client, which in your case means your PVE nodes --> Debian mpath setup.
Inside the MSA you assign disks to a pool owned by controller A (and others to controller B); from those pools you define virtual disks, which are mapped to ports.
When controller A goes down, its v-disks are taken over by B on its own ports.
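
For reference, a minimal sketch of what the Debian-side multipath config could look like for an MSA-class array; the device section values here are assumptions, so check HPE's best-practice guide for the settings recommended for your firmware:

# /etc/multipath.conf (sketch, values assumed)
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
devices {
    device {
        vendor               "HPE"
        product              "MSA 2060*"
        path_grouping_policy group_by_prio
        prio                 alua
        failback             immediate
        no_path_retry        18
    }
}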
 
We are trying to connect a Proxmox cluster to the storage, but we're not sure which of the storage's 8 IPs we should use.

Proxmox only lets us add one IP for the portal, and we can't use just one IP because we need redundancy.
While you can only set up one IP in the storage pool configuration, the other IPs will be discovered by PVE automatically. That's part of the iSCSI discovery and connectivity process (the IPs (portals) are returned in the iSCSI target information).
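
You can see this yourself with a sendtargets discovery against any single portal; the array should answer with all of its portals. A rough illustration (the IPs and IQN here are made up):

# iscsiadm -m discovery -t sendtargets -p 10.0.10.1
10.0.10.1:3260,1 iqn.1986-03.com.hp:storage.msa2060.example
10.0.10.2:3260,2 iqn.1986-03.com.hp:storage.msa2060.example
10.0.20.1:3260,3 iqn.1986-03.com.hp:storage.msa2060.example
... (one line per portal, 8 in total)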

You will need multipath to properly take advantage of all the IPs.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
If you have it set up correctly, it should look something like:
# multipath -ll
3600c0ff000f995ddc4f7056601000000 dm-5 DellEMC,ME5
size=9.1T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 18:0:0:1 sde 8:64 active ready running
| |- 17:0:0:1 sdd 8:48 active ready running
| |- 16:0:0:1 sdc 8:32 active ready running
| `- 22:0:0:1 sdi 8:128 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 15:0:0:1 sdb 8:16 active ready running
  |- 21:0:0:1 sdf 8:80 active ready running
  |- 19:0:0:1 sdg 8:96 active ready running
  `- 20:0:0:1 sdh 8:112 active ready running

As mentioned by bbgeek17, once you configure one IP, it will auto-discover the other 7 (if it doesn't, the paths will not all show as active ready running under one tree).
There is some info here on setting up multipath: https://pve.proxmox.com/wiki/ISCSI_Multipath
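
In short, the per-node setup from that wiki page boils down to installing the tools, dropping in an /etc/multipath.conf, and whitelisting the LUN's WWID (the WWID below is the one from the output above; substitute your own):

# apt-get install multipath-tools
# multipath -a 3600c0ff000f995ddc4f7056601000000   # whitelist the WWID in /etc/multipath/wwids
# systemctl restart multipathd
# multipath -ll                                    # verify that all paths show up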
 
While you can only set up one IP in the storage pool configuration, the other IPs will be discovered by PVE automatically. That's part of the iSCSI discovery and connectivity process (the IPs (portals) are returned in the iSCSI target information).

You will need multipath to properly take advantage of all the IPs.

Thanks. That's exactly the answer I was looking for.
I have another question. The server can now see all 8 IPs of the iSCSI server, multipath is configured, and I've mapped the drive and configured LVM. Now what happens if controller A goes offline (the portal IP belongs to controller A)? I can see from tcpdump that the host is already communicating with all 8 controller IPs. Do I need to add another iSCSI target for controller B so it stays in sync with the Proxmox cluster if controller A goes down?

Sorry, I have little knowledge in this area.

Thank you.
 
If you have it set up correctly, it should look something like: [multipath -ll output quoted above]
Yes, it looks like that now after configuring the multipathing.
Do I need to add another iSCSI target for controller B so it stays in sync with the Proxmox cluster if controller A goes down?
 
You only need 4 paths for a correct multipath setup: 2 IP subnets with one NIC each on the PVE node, each connected to one switch, and one port per controller per switch. The storage then has 4 ports occupied.
You can also use all 8 ports, but more paths than that are not supported.
Once multipath is running, you can make an LVM PV from each multipath device and then assign it to a datastore (LVM) in the GUI.
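
A minimal sketch of that last step, assuming the WWID from the output earlier in the thread and a hypothetical VG and storage name (adjust both to your setup):

# pvcreate /dev/mapper/3600c0ff000f995ddc4f7056601000000
# vgcreate vg_msa /dev/mapper/3600c0ff000f995ddc4f7056601000000
# pvesm add lvm msa-lvm --vgname vg_msa --shared 1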
 
You only need 4 paths for a correct multipath setup: 2 IP subnets with one NIC each on the PVE node, each connected to one switch, and one port per controller per switch. The storage then has 4 ports occupied.
You can also use all 8 ports, but more paths than that are not supported.
Once multipath is running, you can make an LVM PV from each multipath device and then assign it to a datastore (LVM) in the GUI.
Not sure what you mean by more paths not being supported. In my testing, watching stats on the switch, all 8 paths are supported. That said, only 4 are active at a time per volume, and if the controller fails (or is down for a firmware upgrade, etc.), the other 4 paths are used. The paths are active/standby between the two controllers.

Note: Not all iSCSI SANs behave the same. Some are active/passive, and some have floating IPs between controllers. Arrays like the Dell ME5 and MSA 2060 use ALUA for active/passive, and you can set up a second volume with the active/passive roles switched with respect to which controller the volume is active on.
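
With ALUA you can read this straight out of the multipath -ll output above: the prio=50 group is the active/optimized path set on the owning controller, and the prio=10 group is the standby set on the partner. If you want to query the ALUA state directly, the sg_rtpg tool from sg3-utils reports the target port groups; a rough sketch (the device node is an assumption, pick any path device):

# apt-get install sg3-utils
# sg_rtpg --decode /dev/sdb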
 
Yes, it looks like that now after configuring the multipathing.
Do I need to add another iSCSI target for controller B so it stays in sync with the Proxmox cluster if controller A goes down?
If you are seeing all 8 in the multipath -ll output, then you should be all set. You can verify that it already has all the IPs for both controllers by running:
grep "node.conn.*address" /etc/iscsi/nodes/*/*/default
 
Not sure what you mean by more paths not being supported.
HPE does not support more than 8 paths. This is the case with most storage manufacturers.
In my testing, watching stats on the switch, all 8 paths are supported. That said, only 4 are active at a time per volume, and if the controller fails (or is down for a firmware upgrade, etc.), the other 4 paths are used. The paths are active/standby between the two controllers.

Note: Not all iSCSI SANs behave the same. Some are active/passive, and some have floating IPs between controllers. Arrays like the Dell ME5 and MSA 2060 use ALUA for active/passive, and you can set up a second volume with the active/passive roles switched with respect to which controller the volume is active on.
Almost all storage systems (except high-end enterprise ones) use ALUA. The last time I saw floating IPs was with EqualLogic, but those are all EOL.

Of course the 8 paths work, but many systems support up to 256 LUNs and up to 1024 paths; at 8 paths per LUN, 1024 / 8 caps you at 128 LUNs. Only a few setups get into this range, but the limitation is there.
 
I have another question. The server can now see all 8 IPs of the iSCSI server, multipath is configured, and I've mapped the drive and configured LVM. Now what happens if controller A goes offline?
Generally, when an SP (storage processor) goes offline, LUN ownership is transferred by the storage system to the surviving SP. The iSCSI connections to that SP are already established; the connections to the failed SP are marked down, and the multipath software on the client side takes care of the rest automatically.
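
You can watch this from the client side: during a failover the ALUA path groups swap roles, so a multipath -ll taken while one controller is down would look roughly like this (illustrative sketch based on the output earlier in the thread, not captured from a real failover):

# multipath -ll
3600c0ff000f995ddc4f7056601000000 dm-5 DellEMC,ME5
size=9.1T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 15:0:0:1 sdb 8:16 active ready running     <- surviving controller's paths now own the LUN
| ...
`-+- policy='service-time 0' prio=0 status=enabled
  |- 18:0:0:1 sde 8:64 failed faulty running    <- paths to the failed controller marked down
  ...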


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Generally, when an SP (storage processor) goes offline, LUN ownership is transferred by the storage system to the surviving SP. The iSCSI connections to that SP are already established; the connections to the failed SP are marked down, and the multipath software on the client side takes care of the rest automatically.
Great! Thanks a lot for your help.
 
