Multipath iSCSI?

royceybaby

New Member
Dec 15, 2009
Hi Guys,

We are testing Proxmox and have a Dell MD3000i iSCSI array. Is it possible to set up multipath with Proxmox and use this configuration in a Proxmox cluster? Has anybody tried this?

Thanks,
Royce
 

dietmar

Proxmox Staff Member
Apr 28, 2005
royceybaby said:
We are testing Proxmox and have a Dell MD3000i iSCSI array. Is it possible to set up multipath with Proxmox and use this configuration in a Proxmox cluster? Has anybody tried this?

AFAIK you need to install and configure the 'multipath-tools' package (never tested myself).
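For reference, on a stock Proxmox/Debian install that would presumably start with something like this (a sketch, equally untested; the second command just lists the resulting multipath maps once the iSCSI sessions are up):

Code:
# install the device-mapper multipath tools
aptitude install multipath-tools
# once the iSCSI sessions are logged in, list the detected multipath maps
multipath -ll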
 

royceybaby

New Member
Dec 15, 2009
:p I have got it working! Doing some more testing for failover purposes.

I have taken notes on how I did this, if you are interested in them for the wiki?

Royce
 

jhammer

Member
Dec 21, 2009
I would be interested in this as well. Any chance of getting it up on the wiki?

How did your testing for failover purposes go? Also, does it work if the hosts are clustered?

Thanks,

James
 

whinpo

Member
Jan 11, 2010
Hi,

I'm new to Proxmox. I formerly used Xen (the GNU version) and ran tests on XenServer, which didn't convince me (too strict about server configuration to allow live migration, and the free key is only valid for one year...)

I'm starting on Proxmox and have already made some successful tests (Debian, Windows).

I've got an iSCSI system (DataCore SANmelody) and was able to get multipath working:


Code:
# aptitude install multipath-tools
Modify /etc/iscsi/iscsid.conf to allow automatic login to the targets by uncommenting/commenting the following lines:
Code:
node.startup = automatic
#node.startup = manual
Depending on your SAN system you may have to configure things in /etc/iscsi/iscsid.conf like:
Code:
# To specify the length of time to wait for session re-establishment
# before failing SCSI commands back to the application when running
# the Linux SCSI Layer error handler, edit the line.
# The value is in seconds and the default is 120 seconds.
node.session.timeo.replacement_timeout = 120
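When multipath is providing the failover, it is often suggested to lower this value so that failed commands are handed back to the multipath layer quickly instead of queueing for the full two minutes; a commonly cited setting (check your SAN vendor's recommendation before using it) is:

Code:
node.session.timeo.replacement_timeout = 15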
Add the iSCSI targets you need to connect to in the GUI, and then you'll be able to check that multipath is working via:
Code:
# multipath -l
360030d904b564d2d5352000000000000 dm-7 DataCore,SANmelody
[size=2.0T][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:1 sdd 8:48  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:0:1 sde 8:64  [active][undef]

It is working for me. If you think this is OK, I can update the wiki.
 

jhammer

Member
Dec 21, 2009
I have multipath set up...

Code:
prism:~# multipath -ll
CYBERDISK_2 (20000000000000000000b560021b518e1) dm-7 CYBERNET,iSAN Vault
[size=20G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 9:0:0:0  sdd 8:48  [active][ready]
 \_ 8:0:0:0  sdc 8:32  [active][ready]
 \_ 10:0:0:0 sde 8:64  [active][ready]
mpath1 (SATA_ST3500320NS_5QM090RW) dm-3 ATA ,ST3500320NS
[size=466G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:0  sdb 8:16  [active][ready]
CYBERDISK_1 (20000000000000000000b560021b518e0) dm-6 CYBERNET,iSAN Vault
[size=20G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 13:0:0:0 sdh 8:112 [active][ready]
 \_ 12:0:0:0 sdg 8:96  [active][ready]
 \_ 11:0:0:0 sdf 8:80  [active][ready]
CYBERDISK_3 (20000000000000000000b560021b518e2) dm-8 CYBERNET,iSAN Vault
[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 14:0:0:0 sdi 8:128 [active][ready]
 \_ 15:0:0:0 sdj 8:144 [active][ready]
 \_ 16:0:0:0 sdk 8:160 [active][ready]

I added the iSCSI target successfully in Proxmox. But when I try to add a volume group based on that iSCSI target, I get the following error:

Error: command '/sbin/pvcreate --metadatasize 250k /dev/disk/by-id/scsi-20000000000000000000b560021b518e0' failed with exit code 5

It appears to be pointing at a non-multipath device:

prism:/dev/disk/by-id# ls -al /dev/disk/by-id/scsi-20000000000000000000b560021b518e0
lrwxrwxrwx 1 root root 9 Feb 3 15:57 /dev/disk/by-id/scsi-20000000000000000000b560021b518e0 -> ../../sdg

Whereas its alias points at dm-6 instead of sdg:

prism:/dev/disk/by-id# ls -al /dev/disk/by-id/scsi-CYBERDISK_1
lrwxrwxrwx 1 root root 10 Feb 3 16:03 /dev/disk/by-id/scsi-CYBERDISK_1 -> ../../dm-6

How does one add a multipath iSCSI device via the proxmox interface?

Thanks!
 

udo

Famous Member
Apr 22, 2009
Hi,
pvcreate must be done on the multipath device and not on one of the underlying devices directly.

Code:
pvcreate /dev/dm-6
should work.
With multipath, I created the volume group on the shell and then added it in the web frontend.
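Note that the /dev/dm-N numbering is not guaranteed to be stable across reboots, so where a multipath alias exists (like CYBERDISK_1 in the output above) the /dev/mapper name is the safer one to use:

Code:
pvcreate /dev/mapper/CYBERDISK_1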

Udo
 

jhammer

Member
Dec 21, 2009
Thank you! This worked for me.

I did:

Code:
pvcreate /dev/dm-6


then:

Code:
vgcreate TestVol /dev/dm-6


TestVol then appeared for me in the web interface under "Existing volume groups".

Thanks!
 

jhammer

Member
Dec 21, 2009
Are there any plans to integrate iSCSI multipath into the Proxmox web interface?

Here is the basic process I go through currently to add a disk (a consolidated sketch of the shell side follows the list):

  1. Create disk on SAN
  2. Grant access to that disk to the appropriate user/network
  3. # iscsiadm -m discovery -t st -p [SAN IP]
  4. # iscsiadm -m node --loginall=automatic
  5. Create multipath entry in /etc/multipath.conf (if desired)
  6. # /etc/init.d/multipath-tools reload
  7. # pvcreate /dev/dm-8 (where /dev/dm-8 is the new multipath device)
  8. # vgcreate [NameOfVolume] /dev/dm-8
  9. On the storage section of the web interface, add an LVM group using the new [NameOfVolume] volume under "Existing volume groups".
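Pulled together on the shell, steps 3-8 look roughly like this (a sketch: [SAN IP] and [NameOfVolume] are placeholders, the wwid/alias are the CYBERDISK_1 values from the earlier post, and the /dev/mapper alias is used in place of the bare /dev/dm-8 name):

Code:
# 3./4. discover the targets on the SAN and log in to all of them
iscsiadm -m discovery -t st -p [SAN IP]
iscsiadm -m node --loginall=automatic

# 5. optional friendly name in /etc/multipath.conf:
# multipaths {
#     multipath {
#         wwid  20000000000000000000b560021b518e0
#         alias CYBERDISK_1
#     }
# }

# 6. pick up the new multipath configuration
/etc/init.d/multipath-tools reload

# 7./8. initialize the multipath device and create the volume group
pvcreate /dev/mapper/CYBERDISK_1
vgcreate [NameOfVolume] /dev/mapper/CYBERDISK_1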
 

althalus1969

New Member
Oct 15, 2010
Hi Guys,

I still don't get the cluster part... Do I have to create the multipath device on all the cluster nodes? And if yes, how does the volume group stuff work then?
I am a bit confused about this.
 

jhammer

Member
Dec 21, 2009
You do have to login to the iscsi target on all nodes in the cluster. However, you just need to do the pvcreate and vgcreate on a single node. That, I believe, writes the physical and volume group information to the disks. Then, since all nodes should be logged in to that iscsi target, all nodes in the cluster should have access to the physical and volume group information. You can run vgs and pvs on each node to make sure they can all see the volume groups.

The actual multipath devices (e.g. /dev/dm-8) do not need to be the same on all the nodes. In fact, they will likely be different. The proxmox cluster, I believe, relies on the volume group name and not the multipath device file.
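In shell terms, something like this (a sketch following the steps above; [SAN IP] and [NameOfVolume] are placeholders, and the /dev/dm-N name will likely differ per node):

Code:
# on EVERY node: discover and log in to the iSCSI target
iscsiadm -m discovery -t st -p [SAN IP]
iscsiadm -m node --loginall=automatic

# on ONE node only: write the PV label and volume group metadata
pvcreate /dev/dm-8
vgcreate [NameOfVolume] /dev/dm-8

# on every node: check that the PV and VG are visible
pvs
vgs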
 
