Multipath iSCSI, LVM and Cluster

philister

Member
Jun 18, 2012
Hello all,

we've been playing around with Proxmox for a couple of months now (since 2.0) and love it for its simplicity and great out-of-the-box experience. Now, however, I think we've come to a point where what we need cannot be accomplished in the GUI. But we need this sorted out before we move to the data center.

As we want and need both live migration and online backup, we went for LVM on shared iSCSI storage. This is easily done with a simple iSCSI target I set up with Debian for testing purposes. In the data center, however, we'll be using an HP EVA P6000 SAN storage array, which uses multipath. I have this unit in my lab right now and am trying to get it running with my Proxmox cluster. I followed the tutorial at http://pve.proxmox.com/wiki/ISCSI_Multipath but still have some questions:

Are the following assumptions correct?


  1. The steps described in the tutorial (installation and configuration of multipath-tools, iSCSI discovery and login) must be done on each node in the cluster
  2. The LVM physical volume and volume group must be created on the CLI, this cannot be done on the GUI
  3. The PV and VG must be created on the multipath device that represents the multiple "normal" devices (for example /dev/dm-3; see the sketch after this list)
  4. Step 3 must only be done once, on one node in the cluster
  5. I can then create an LVM group in the GUI, choosing my previously created VG by selecting "existing volume groups" in the base storage field
  6. With multipath iSCSI, I never add an iSCSI target on the GUI. I do the iSCSI and LVM stuff on the CLI and then add the existing LVM group through the GUI
  7. I have to repeat step 1 with every node I add to the cluster
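
In other words, the CLI part (assumptions 2-4) would presumably boil down to something like this; the device path and VG name are just examples and would of course depend on the actual setup:

Code:
# on one node only, once the multipath device is visible
pvcreate /dev/dm-3
vgcreate my-shared-vg /dev/dm-3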

Thank you very much for your confirmations and/or corrections.

Best regards,
 
Hello philister,

We find ourselves facing the same question you have posted here and could not find an answer anywhere online. May we ask whether you have gotten any answers to the above questions yet, or have successfully set up this configuration?

We use a Dell PowerVault MD3620i SAN and have successfully connected a multipath iSCSI target to one Proxmox node. However, we are unsure how to set this up in a cluster configuration. We hope you can share your knowledge and experience on this.

Thanks!
 
It turned out that my assumptions above (1-7) were correct. I'll post my more detailed notes below. I can say that iSCSI multipath in a Proxmox cluster works very well, at least with our storage array, which is an HP EVA.


Setting up an iSCSI/LVM storage (multipath)


  • Install the multipath tools: aptitude install multipath-tools
  • Change the following parameters in /etc/iscsi/iscsi.conf:
Code:
node.startup = automatic           
node.session.timeo.replacement_timeout = 15
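
After changing these parameters, the iSCSI initiator presumably has to be restarted (or the node rebooted, as below) for them to take effect; on Debian something along these lines should work:
Code:
# restart the open-iscsi initiator so the changed settings take effect (service name may differ)
service open-iscsi restart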


  • Scan the iSCSI target for LUNs and log in to them
    • either iscsiadm --mode discovery --type sendtargets --login --portal 10.0.XXX.XXX
    • or iscsiadm -m discovery -t st -l -p 10.0.XXX.XXX
    • In EVA Command View:
      • Create the new host (Add Host > iSCSI; the host's iSCSI node name can be found in /etc/iscsi/initiatorname.iscsi)
      • Present the vdisks to the new host
    • Repeat the iscsiadm command from above
  • Reboot the Proxmox node (TODO: find an alternative to this)
  • Check the new devices: multipath -v3 - in the paths list you can see the WWID (UUID) of the new devices. Several new devices correspond to the same LUN (recognizable by the identical WWID), hence multipath.
  • Add the new LUN to the multipath configuration (/etc/multipath.conf):

Code:
defaults {
        polling_interval        2
        selector                "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
}

blacklist {
        wwid *
}

blacklist_exceptions {
        wwid "XXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
        wwid "wwid-of-the-new-lun"
}

multipaths {
        multipath {
                wwid "XXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
                alias vdisk001
        }
        multipath {
                wwid "wwid-of-the-new-lun"
                alias vdisk002
        }
}
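
The WWIDs for the blacklist_exceptions and multipaths sections can presumably also be read directly from one of the new path devices with the same scsi_id call used in getuid_callout above; the device name is only an example:

Code:
# print the WWID of one of the newly discovered path devices
/lib/udev/scsi_id -g -u -d /dev/sdb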


  • Restart the multipath service: service multipath-tools reload, then service multipath-tools restart
  • Query the multipath status: multipath -ll - the output should look like this or similar (it can take a while; you may have to restart the service once more):

Code:
vdisk001 (XXXXXXXXXXXXXXXXXXXXXXXXXXXXX) dm-3 HP,HSV340
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=8 status=active
  |- 3:0:0:1  sdc 8:32  active ready  running
  |- 1:0:0:1  sdb 8:16  active ready  running
  |- 2:0:0:1  sdd 8:48  active ready  running
  |- 4:0:0:1  sde 8:64  active ready  running
  |- 12:0:0:1 sdf 8:80  active ready  running
  |- 15:0:0:1 sdg 8:96  active ready  running
  |- 11:0:0:1 sdh 8:112 active ready  running
  `- 16:0:0:1 sdi 8:128 active ready  running


  • Create the LVM physical volume (PV) and volume group (VG) on the multipath device (in this case /dev/dm-3); this only needs to be done once, on one node in the cluster!
    • Create the PV: pvcreate /dev/dm-3
    • Create the VG: vgcreate san1-vdisk001 /dev/dm-3
  • Add the LVM group via the Proxmox GUI (this only needs to be done once, on one node in the cluster!)
    • Datacenter >> Storage >> Add >> LVM group
    • For Base storage, select the entry Existing volume groups
    • For Volume group, select the VG created on the command line earlier
    • Tick Enable and Shared >> Add
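
On the other cluster nodes (after repeating the multipath and iSCSI steps there), it should be possible to check that the multipath device and the VG are visible before using the shared storage; the alias and VG name below are the ones from this example:

Code:
multipath -ll   # the vdisk001 alias should show up here as well
pvs             # the PV on the multipath device should be listed
vgs             # the VG san1-vdisk001 should be visible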
 
Philister,

Thank you for sharing.

 
Hi, all!

I have a different kind of problem, but with an EMC Clariion CX4-240 storage array; maybe you can help me. It is related to live migration (the iSCSI LVM, shared storage, and LVM HA cluster keywords brought me here).
We created a 3-node cluster; the nodes are connected over iSCSI to the EMC Clariion CX4-240.
We configured the storage as you described: from the CLI we manually created the PV and VG on one node, and after that we created the VM and the LVM storage via the Proxmox GUI.

We managed to create the multipath device, but we always had only one channel active. In the end we got an Active/Active multipath status with a manual command: multipath -r -p multibus. So if you have this problem, here is the solution. We also had a problem with blacklisting because of a one-character error in the WWID, which drove us crazy.
For failover, we are using simple scripts (ping check and fencing activation in case of failure) that cover all possible failure scenarios (iSCSI, storage, network, interface). We went with scripts because for some failure cases, such as node power failure or failure of the iSCSI interface, it was easier to write a script.
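
Just to illustrate the idea, a stripped-down version of such a check could look roughly like this; the portal address, node name and the fence_node call are placeholders and depend on the actual fencing setup:

Code:
#!/bin/bash
# very simplified ping check: if the iSCSI portal stops answering,
# fence the affected node (addresses and node name are placeholders)
PORTAL=10.0.XXX.XXX
NODE=proxmox2

if ! ping -c 3 -W 2 "$PORTAL" > /dev/null 2>&1; then
    # replace fence_node with whatever matches your fence device/agent
    fence_node "$NODE"
fi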
Finally, to the actual question: we have a problem with live migration, which you have working. Can you give us details on how you got live migration working? Where is the catch?

Here is the error we get:

Executing HA migrate for VM 104 to node proxmox1
Trying to migrate pvevm:104 to proxmox1...Failure
TASK ERROR: command 'clusvcadm -M pvevm:104 -m proxmox1' failed: exit code 255

Thanks for help.
 
