Question about iSCSI storage, redundancy, and multipathing?

surfrock66

New Member
Feb 10, 2020
We have Proxmox on our previous-generation hardware as a secondary environment to our newer VMware environment. Our production Compellent SAN has the following topology:
  • 2 controllers
  • Each controller has 2 ports.
  • One port on each controller goes to a non-routable 10.1.19 subnet; the other goes to a non-routable 10.1.13 subnet.
  • Each subnet has its own fiber switch for redundancy.
  • The SAN organizes those into iSCSI Fault Domains based on the subnet.
  • Each fault domain has 2 virtual ports.
  • The VMware environment hosts have 2 fiber NICs, which provide redundancy to the hosts.
  • The Proxmox environment hosts have 1 fiber NIC, and thus only connect to the 10.1.13 fault domain and subnet.
I have created a host group on the SAN comprising my Proxmox hosts. I have made a volume and presented it to the Proxmox hosts. When I perform "pvesm scan iscsi 10.1.13.###" against the SAN, I am presented with 2 iSCSI targets, which are the virtual ports I see for the fault domain.

Here is where I'm getting mixed up. I want to use pvesm to add the storage; my understanding is that each target becomes its own storage device, so I have the following commands:

Code:
pvesm add iscsi SAN-COMPELLENT-1 --portal 10.1.13.### --target iqn.2002-03.com.compellent:###1
pvesm add iscsi SAN-COMPELLENT-2 --portal 10.1.13.### --target iqn.2002-03.com.compellent:###3

This successfully adds the devices; 2 storage devices show up in the GUI on the host. That said, the LUN only shows up under the first storage device; if I go to make an LVM volume group and choose the base storage device, a LUN only shows up under the base volume of one device, not the second.

While I've seen references to multiple targets going to the same storage device, the following command doesn't work: the storage device only shows the last target under "Path/Target" and the storage isn't available. The pvesm documentation doesn't indicate anything about multiple --target parameters:

Code:
pvesm add iscsi SAN-COMPELLENT-1 --portal 10.1.13.### --target iqn.2002-03.com.compellent:###1 --target iqn.2002-03.com.compellent:###3

Can I get a sanity check for this architecture?
  • What is the correct way to connect this LUN: one storage device with 2 iSCSI targets, or 2 storage devices with 1 iSCSI target each?
  • If it's the latter, should the LUN show up under both iSCSI targets/storage devices?
  • Is making the LUN show up under both something to do with multipathing? I am assuming that since I only have 1 NIC on the server I do NOT use multipathing here; am I misunderstanding that?
  • If I am correct in setting up both storage devices separately, my fault domains act active/passive with the virtual ports; if a failover happens and a LUN with the same WWID shows up under the other storage device, will Proxmox see that seamlessly?
 

bbgeek17

Member
Nov 20, 2020
www.blockbridge.com
There are several layers of redundancy involved in an enterprise SAN connection:
- port/path (nic/cable/switch)
- controller

Keep in mind that Compellent is not active/active for a given LUN. The LUN "belongs" to one of the two controllers and will only be seen via a network path on that controller. If that controller fails, the LUN will be taken over by the "standby" controller.
You can, and I think it's done somewhat automatically, spread multiple LUNs across both controllers so that both are working for you. However, keep in mind that if both controllers are busy at 51% load, then when one fails the survivor will have to handle 102%, which may lead to upset expectations.

So, seeing the LUN through only a single target is expected.

You said that you only have a single path from Proxmox to the SAN, which means you don't have the ability to do multipathing. Should you have a failure anywhere in the path between the SAN and Proxmox, you will lose access to storage, and there will not be an automatic failover to the other controller.

SAN/multipath configuration is not Proxmox-specific. You should check the many Compellent-related guides/howtos as they relate to your environment (single path, Linux OS) and follow those.


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

surfrock66

Ok, that's good; that's what I thought, I just wanted to be sure.

So, if I add the storage through 2 "storage devices" and I see the LUN through one of those targets, do I have to do anything in the case of a failover? Will the LUN just show up under the other storage device for the other target, and will Proxmox see it, understand it, and make sure it's available to my guests?
 

bbgeek17

The best way is to test it. Make the LUN available to Proxmox, then manually move it to the other controller.

The disk UUID will remain the same, and the LVM (if you use that) will remain the same. Could there be some other dependency at the app layer that trips you up? Possibly.
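As a rough sketch of such a test (the device node /dev/sdb, the scsi_id helper path, and the fallback value are assumptions; substitute the disk backing your LVM PV), you could capture the WWID before and after moving the LUN and confirm it is unchanged:

```shell
#!/bin/sh
# get_wwid prints the SCSI WWID of a block device. On a real host this
# calls the scsi_id udev helper; the fallback echo is a placeholder so
# the comparison logic can be exercised without SAN hardware.
get_wwid() {
    /lib/udev/scsi_id -g -u -d "$1" 2>/dev/null || echo "36000d31-example-wwid"
}

wwid_before=$(get_wwid /dev/sdb)

# ... now move the LUN to the other controller on the Compellent side,
# then rescan the iSCSI session, e.g.: iscsiadm -m session --rescan

wwid_after=$(get_wwid /dev/sdb)

if [ "$wwid_before" = "$wwid_after" ]; then
    echo "WWID stable"
else
    echo "WWID changed" >&2
fi
```

If the WWID is stable across the controller move, LVM should re-find its PV without intervention.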

Even with a single path you should look into setting up multipathd; specifically, look into the ALUA-related config. If you set it up properly, the "swap" will happen at a lower layer, and to the app/PVE it will remain the same /dev/mpath device.
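For illustration only, a minimal /etc/multipath.conf fragment along those lines might look like this (the vendor/product strings and settings here are assumptions based on typical Compellent Linux guides; verify them against Dell's documentation for your firmware before using them):

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}

devices {
    device {
        vendor               "COMPELNT"
        product              "Compellent Vol"
        path_grouping_policy group_by_prio
        prio                 alua
        path_checker         tur
        failback             immediate
        no_path_retry        24
    }
}
```

With group_by_prio and the ALUA prioritizer, paths to the owning controller are preferred, and a controller takeover shows up as a priority change on the same multipath device rather than a new disk.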


 
