Proxmox Multipath and Dell MD3200i

ferretiz

Guest
Hello Folks

I have a Dell iSCSI MD3200i configured with 8 redundant Ethernet connections for iSCSI; the IPs are:
10.10.1.1
10.10.1.2
10.10.2.1
10.10.2.2
10.10.3.1
10.10.3.2
10.10.4.1
10.10.4.2

Why did I do that? Well... Dell's documentation says it is the best configuration for high availability, so I decided to give it a try.
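For reference, the sessions to those portals were set up with open-iscsi, roughly like this (a sketch; the portal IP is just one of the group addresses above, and discovery returns all eight):

Code:
# discover the targets advertised by one portal of the array
iscsiadm -m discovery -t sendtargets -p 10.10.1.1

# log in to every discovered node (one session per portal)
iscsiadm -m node -L all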

I also found the /etc/multipath.conf suggested by Dell:
Code:
devices {
    device {
        vendor "DELL"
        product "MD32xxi"
        path_grouping_policy group_by_prio
        prio rdac
        polling_interval 5
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
        prio_callout "/sbin/mpath_prio_rdac /dev/%n"
    }
    device {
        vendor "DELL"
        product "MD32xx"
        path_grouping_policy group_by_prio
        prio rdac
        polling_interval 5
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
        prio_callout "/sbin/mpath_prio_rdac /dev/%n"
    }
}

When I enter multipath -ll:
Code:
mpath1 (36842b2b000570ae60000084e4d1b96a6) dm-3 DELL ,MD32xxi
[size=50G][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac]
\_ round-robin 0 [prio=9][active]
 \_ 8:0:0:5  sdb 8:16  [active][ready]
 \_ 3:0:0:5  sdc 8:32  [active][ready]
 \_ #:#:#:#  -   #:#   [failed][faulty]
 \_ 10:0:0:5 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 9:0:0:5  sdi 8:128 [active][ghost]
 \_ 6:0:0:5  sdd 8:48  [active][ghost]
 \_ 4:0:0:5  sde 8:64  [active][ghost]
 \_ 5:0:0:5  sdf 8:80  [active][ghost]

Except for the third item in group one, everything looks good. In fact, I was able to successfully add an iSCSI storage and assign the iSCSI LUN as an LVM volume in Proxmox. But when I enter vgs:
~# vgs
/dev/sdd: read failed after 0 of 4096 at 0: Input/output error
/dev/sde: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf: read failed after 0 of 4096 at 0: Input/output error
/dev/sdi: read failed after 0 of 4096 at 0: Input/output error
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 278.37G 3.99G
pve_vg1 1 3 0 wz--n- 50.00G 5.00G

I have input/output errors, and right now only one LUN is attached to Proxmox. I'm kind of new to Proxmox and iSCSI, but /dev/sd[b-i] are supposed to be the same "physical" device, right?
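They should be: /dev/sdb through /dev/sdi are eight paths to the same LUN, one per iSCSI session, which is why multipath -ll groups them under a single WWID above. As a sanity check you can ask each path for its WWID; this sketch assumes a newer udev (on older udev the syntax is scsi_id -g -u -s /block/sdb instead):

Code:
# every path should report the same WWID (36842b2b0005...)
# the failed path will simply error out
for d in /dev/sd[b-i]; do
    echo -n "$d: "
    /lib/udev/scsi_id --whitelisted --device=$d
done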

I almost forgot to mention that my Proxmox server has these addresses configured:
10.10.1.5
10.10.2.5
10.10.3.5
10.10.4.5

These are the targets...
:~# iscsiadm list -m node 10.10.1.1
10.10.2.1:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6842b2b000570ae6000000004ced98de
10.10.3.2:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6842b2b000570ae6000000004ced98de
10.10.1.2:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6842b2b000570ae6000000004ced98de
10.10.2.2:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6842b2b000570ae6000000004ced98de
10.10.1.1:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6842b2b000570ae6000000004ced98de
10.10.3.1:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6842b2b000570ae6000000004ced98de
10.10.4.2:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6842b2b000570ae6000000004ced98de
10.10.4.1:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6842b2b000570ae6000000004ced98de
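With all eight portals logged in, each one should show up as its own session (and its own sdX device). Assuming open-iscsi, something like this lists them together with the attached SCSI devices:

Code:
# print level 3 shows, per session, which sdX device it produced
iscsiadm -m session -P 3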

I get the same errors for lvs:
:~# lvs
/dev/sdd: read failed after 0 of 4096 at 0: Input/output error
/dev/sde: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf: read failed after 0 of 4096 at 0: Input/output error
/dev/sdi: read failed after 0 of 4096 at 0: Input/output error
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
data pve -wi-ao 189.88G
root pve -wi-ao 69.50G
swap pve -wi-ao 15.00G
vm-101-disk-1 pve_vg1 -wi-a- 10.00G
vm-102-disk-1 pve_vg1 -wi-ao 20.00G
vm-106-disk-1 pve_vg1 -wi-a- 15.00G

:~# pvs
/dev/sdd: read failed after 0 of 4096 at 0: Input/output error
/dev/sde: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf: read failed after 0 of 4096 at 0: Input/output error
/dev/sdi: read failed after 0 of 4096 at 0: Input/output error
PV VG Fmt Attr PSize PFree
/dev/dm-3 pve_vg1 lvm2 a- 50.00G 5.00G
/dev/sda2 pve lvm2 a- 278.37G 3.99G

Any ideas?

BTW, I'm using Proxmox 1.7, kernel 2.6.32-4-pve #1 SMP Fri Nov 26 06:42:28 CET 2010 x86_64 GNU/Linux.
 
I am about to configure the same thing. I have a Dell PE R610 that I will be running Proxmox on, and a Dell MD3200i. If I find anything that works, I will try to post it here.
 
I don't understand why you put DELL twice in the multipath.conf...?
I think it should be there only once...

If you look at the doc I've put in the wiki, you'll see what I've got in my multipath.conf for a Datacore system...
http://pve.proxmox.com/wiki/ISCSI_Multipath

Stop multipath, remove one device {......} definition and restart it (a reboot might be necessary...).

For example, try with:
Code:
devices {
    device {
        vendor "DELL"
        product "MD32xxi"
        path_grouping_policy group_by_prio
        prio rdac
        polling_interval 5
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
        prio_callout "/sbin/mpath_prio_rdac /dev/%n"
    }
}
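To apply the edited config, something along these lines should work (this assumes the Debian multipath-tools init script that Proxmox 1.x ships with; if the maps are still in use by LVM the flush will fail and a reboot is the easier route):

Code:
/etc/init.d/multipath-tools stop
# flush the existing multipath maps (fails if they are still in use)
multipath -F
/etc/init.d/multipath-tools start
# rebuild the maps with the new config and check the result
multipath -v2
multipath -ll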

Another thing: check that /sbin/mpath_prio_rdac exists...
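For example (on a Debian-based Proxmox the callout should come with the multipath-tools package):

Code:
ls -l /sbin/mpath_prio_rdac
# if it is missing, check which package provides it
dpkg -S mpath_prio_rdac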
 
You need to filter out the sd[a-z] devices that come from iSCSI in lvm.conf so LVM only scans your multipath devices.

Example /etc/lvm/lvm.conf:

Code:
filter = [ "a|/dev/cciss/.*|", "a|/dev/mapper/mpath[0-9]+|", "r/.*/" ]

I've got my system on an HP P4xx RAID array, so the first rule includes the /dev/cciss/* devices. If your system is on /dev/sda, you should put something like:

Code:
filter = [ "a|/dev/sda|", "a|/dev/mapper/mpath[0-9]+|", "r/.*/" ]
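After changing the filter, a rescan should no longer show the read errors on the raw /dev/sdX paths; only the multipath device and the local disk should remain as PVs:

Code:
# the I/O errors on /dev/sd[b-i] should be gone now
pvscan
vgs
lvs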
 
Hi,
I am a newbie with Proxmox.

I am having a similar issue; maybe somebody can help me, please.

I have a CMC-67R28Y1 PowerEdge VRTX and I have installed Proxmox in two different slots. I want to create a quorum with the VDISK so that it is accessible from the Proxmox cluster. Can someone help me?

Thanks

Tavo.
 
Please do not just write "please help"; instead, try to configure it yourself, following the docs and howtos. If you have any specific question or problem, open a new thread.
 