iSCSI MPIO + HP P2000 G3

tomstephens89

Renowned Member
Mar 10, 2014
Kingsclere, United Kingdom
Hi guys,

So next week I am finally converting my old VMware infrastructure to Proxmox, and I have a question about iSCSI MPIO with our HP P2000 G3.

The P2000 has two controllers, each with two 10Gbit NICs. Each server also has two 10Gbit NICs, connected to separate 10Gbit switches. Each controller in the P2000 has one connection to switch 1 and one to switch 2, on separate subnets. Each server has one connection to each switch.

So, each server can see both interfaces on both controllers of the SAN, [A1,B1] + [A2,B2].

To summarise:


  • 2 separate storage networks
  • 2 interfaces per SAN controller
  • Each server has a connection to storage network 1 & storage network 2
  • Each SAN controller has one interface on each network

So the question is, how do I configure this in Proxmox? Looking at the MPIO article, I should add the iSCSI target in the GUI first, then configure multipath-tools, then add an LVM group on top of my new MPIO device.

Now, do I have to enter the IP addresses of the other SAN interfaces somewhere, or are those paths auto-detected? So far I have only made Proxmox aware of one SAN interface.

Secondly, will I have to copy this MPIO config to all of the other Proxmox cluster nodes manually?

Thanks
 
What do you mean by entering an IP for the SAN interface?

As far as Proxmox is concerned there is no remote storage provider, since that information is kept hidden behind the iSCSI agent and LVM. All Proxmox sees is the VG and the LVs.

The MPIO configuration has to be installed manually since Proxmox does not take care of this.
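If it helps, this is roughly what that manual part looks like. A minimal sketch only: the WWID and alias are placeholders, and the device settings are generic ALUA values that you should verify against HP's documentation for the P2000.

Code:
apt-get install multipath-tools

# /etc/multipath.conf (sketch)
defaults {
        user_friendly_names yes
}
devices {
        device {
                # check the exact vendor/product strings with: multipath -v3
                vendor "HP"
                product "P2000"
                path_grouping_policy group_by_prio
                prio alua
                hardware_handler "1 alua"
                path_checker tur
                failback immediate
                no_path_retry 18
        }
}
multipaths {
        multipath {
                # find the WWID with: /lib/udev/scsi_id -g -u -d /dev/sdX
                wwid 3600c0ff000dabfe5ee65e45001000000
                alias p2000-lun0
        }
}

After editing the file, restart the daemon and check the paths with:

Code:
service multipath-tools restart
multipath -ll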
 
What do you mean by entering an IP for the SAN interface?

As far as Proxmox is concerned there is no remote storage provider, since that information is kept hidden behind the iSCSI agent and LVM. All Proxmox sees is the VG and the LVs.

The MPIO configuration has to be installed manually since Proxmox does not take care of this.

Thanks for the response. What I mean is that one of the configuration steps required for iSCSI is to add the iSCSI target through the GUI, which asks for an IP address to which it can connect and discover any LUNs, correct?

Now, after I have done that, the next step would be to install and configure MPIO. This is where I assume multipath-tools detects the additional paths itself, as opposed to me manually feeding it the other IP addresses that the SAN has on its other interfaces?

So let's say I have done the above two steps: added the iSCSI target in the GUI, and configured MPIO in the CLI. Does step 1, adding the target via the GUI, get synced to all other cluster nodes? In which case my next step would be to install and copy the MPIO config to the other 15 nodes. After that, using the GUI again, I should create an LVM group which sits on top of the newly made multipath device?

Am I correct?

Tom
 
This is correct.
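
For reference, a rough sketch of steps 1 and 2 from the CLI. The portal IPs and node names are placeholders for your two storage networks and cluster nodes; the GUI step performs the same discovery and login as the first commands here.

Code:
# discover the target via one portal on each storage network
iscsiadm -m discovery -t sendtargets -p 192.168.10.100
iscsiadm -m discovery -t sendtargets -p 192.168.20.100

# log in to all discovered portals
iscsiadm -m node -L all

# once multipath-tools is configured, both paths should appear
# under a single multipath device:
multipath -ll

# /etc/multipath.conf is not synced by the cluster filesystem,
# so copy it to the other nodes yourself, e.g.:
for n in $(seq -w 2 16); do scp /etc/multipath.conf node$n:/etc/; done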

Excellent. I will be building the Proxmox cluster in the middle of the week and will let you know how it goes.

It's actually quite a milestone for me... Over the past 3 years I have been really into open source & cloud tech like Proxmox, pfSense, OpenStack etc.

This 16-node Proxmox cluster will mark the first time I have deployed a 'large' virtualisation cluster into production which isn't VMware vSphere. :D
 
We just need solid IPv6 support in the GUI

We are already working on this

and perhaps a better way of implementing fencing for HA. Like a wizard or something.

The new HA manager can do watchdog-based fencing, which is very easy to configure, so it will be much easier for beginners to start with HA.
 
We are already working on this



The new HA manager can do watchdog-based fencing, which is very easy to configure, so it will be much easier for beginners to start with HA.

When you say 'new' HA, do you mean what's coming in V4, or is it new in the current 3.4?

Watchdog-based HA sounds like a very good idea, since that's how vSphere does it and it's very easy.

I was going to use the M1000e blade CMC method to fence using the iDRAC in the 16 blades.
 
Full IPv6 support in the GUI will be part of the next release. To the fencing improvements you could also add configuration of failover domains in the GUI.
 
So tomorrow I will start to build the Proxmox cluster, but I am still unclear on how to approach the iSCSI MPIO...

Since I have multiple paths to the storage via different subnets, when I do the first step and add iSCSI in the GUI, do I need to do this multiple times, once for each SAN IP address, or just for one of them before I configure MPIO?

I have attached a screenshot. What I want to know is: do I have to repeat the step shown for each of the SAN's IP interfaces? (Of which mine has 4, 2 per controller.)

[Attachment: Screen Shot 2015-05-12 at 22.15.58.png]
 
Any response on this?
 
The wiki could maybe use some clarification, and perhaps the GUI could be improved to be multipath-aware.

What the wiki describes for creating iSCSI connections is simply to configure an initiator for each path, which is equivalent to a discovery and login from the CLI. Once the initiators have been configured properly, you turn to multipath-tools to pair the two paths into a single logical path, which is given a name. This name is used from the CLI to create a PV and a VG. After you have created the VG, you can access it from within the GUI to create the LVs used for storage.
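
As a sketch of that last CLI part, assuming a multipath device that has been given the alias p2000-lun0 and a placeholder VG name:

Code:
# create the PV and VG on the multipath device, not on an underlying /dev/sdX
pvcreate /dev/mapper/p2000-lun0
vgcreate vg_p2000 /dev/mapper/p2000-lun0

The VG only needs to be created once. In the GUI you then add it via Datacenter -> Storage -> Add -> LVM with the 'Shared' flag set; since /etc/pve/storage.cfg is cluster-wide, every node will use it, provided each node has logged in to the target and runs the same multipath configuration.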
 
