Proxmox 4.2-11 multipath issue

I have the same situation with Proxmox 4.2: running

multipath -ll

shows nothing. Only after I run

multipath -v2

is the multipath data presented.

any idea?
 
guys,

actually no solution yet...
i'll take a deeper look at the notes proposed by braddy31 in the next 2-3 days, then this thread will be updated with our results....

please note i've seen some updates on the wiki page related to multipath-over-iSCSI configuration (http://pve.proxmox.com/wiki/ISCSI_Multipath); these updates were posted on June 21!! (so it seems someone from the staff is following this issue)

i'll keep you all updated,
regards,

francesco
 
@Maurizio Marini, @rickygm

I've tested the multipath configuration on one of the four nodes of the cluster, rebuilding it from scratch (totally rewriting the multipath.conf file as stated in the wiki article, all according to the v4.x instructions), without success!!

@braddy33

i've read your notes,
and i've tested the scenario, issuing the commands to set max_sectors_kb for the 4 devices we've connected in multipath mode:

echo 512 > /sys/block/sdc/queue/max_sectors_kb
echo 512 > /sys/block/sdd/queue/max_sectors_kb
echo 512 > /sys/block/sde/queue/max_sectors_kb
echo 512 > /sys/block/sdf/queue/max_sectors_kb

then restarted multipath services...
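(for reference, a minimal sketch of what "restarted multipath services" means here, assuming the stock Debian Jessie / Proxmox 4.x service names:)

# restart the multipath daemon so it re-reads /etc/multipath.conf
systemctl restart multipath-tools
# or flush the unused maps and try to rebuild them by hand
multipath -F
multipath -v2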

the multipath -ll output is still the same....

if i run multipath -d
the system reports the multipaths as correctly formed, but not active...

well,

actually.... we still have no solution!!

waiting for your suggestions or the results of your tests....

regards,

francesco
 

Hi, what I do is add the multipath -v2 command to the rc.local file. It's not the best, but it's something; I hope the developers are working on this problem. If you reboot the server you should see multipath working.
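(a minimal sketch of that workaround, assuming the default /etc/rc.local shipped with Debian Jessie, which already ends with "exit 0":)

#!/bin/sh -e
#
# /etc/rc.local - executed at the end of each multiuser runlevel
#
# rebuild the multipath maps that were not assembled at boot
/sbin/multipath -v2

exit 0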
 

Attachments

  • Captura de pantalla 2016-07-15 a las 10.22.33 AM.png (27.6 KB)
@rickygm,

I'll try your suggestion..
I'll be on-site again this week, so I may implement it on one of the four nodes; then, if it works, I'll replicate it on the others!

I'll keep you all updated on test results!!

regards,
francesco
 
@rickygm,

as i promised...
i've tested your fix on one of the servers...

no changes...
right after reboot, if you run:

multipath -ll

nothing is reported as active multipath

the only way to see something reported is to run

multipath -d

but this is only a dry run: it confirms the validity of the config file parameters without creating any object inside the /dev/mapper structure...

you'll find attached the command output of:

multipath -v3

which reports a "failure" during the multipath map build:

mpath0: domap (0) failure for create/reload map

looking around, the most similar issue seems to be this one:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=586182
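(for anyone wanting to dig into the domap failure, a minimal sketch of device-mapper side checks that may help; these are standard dmsetup/lsmod commands, nothing Proxmox-specific:)

dmsetup ls --target multipath    # lists any multipath maps device-mapper actually created
dmsetup table                    # dumps the mapping tables of the existing dm devices
lsmod | grep dm_multipath        # confirms the dm-multipath target module is loaded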

waiting for other ideas or suggestions

regards,

francesco
 


Hi stranger, I'll show you the version of the package installed by default on Debian:

dpkg -s multipath-tools | grep Version

Version: 0.5.0-6+deb8u2
 

Attachments

  • Captura de pantalla 2016-07-22 a las 8.24.28 AM.png (31.8 KB)
@rickygm,

I'm currently unable to give you this info...
maybe Matteo (he's the IT manager at the customer site)
will be able to post it next Monday!

many thanks again for your attention,

regards,

francesco
 
still no solution on our side....
i'll be on-site tomorrow,
and i'll try some actions i have in mind....

keep you all updated!

regards,
francesco
 
@rickygm

on-site right now!!

as you requested, here are the infos i retrieved.....

---------------------------------------------------------

dpkg -s multipath-tools | grep Version

returns:

Version: 0.5.0-6+deb8u2

---------------------------------------------------------
multipath -ll

returns:
nothing!!!

---------------------------------------------------------

multipath -d

returns:

create: mpatha (3600508b1001030393632423838300700) undef HP,LOGICAL VOLUME
size=80G features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
`- 3:1:0:0 sda 8:0 undef ready running
create: mpathb (3600508b1001030393632423838300800) undef HP,LOGICAL VOLUME
size=193G features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
`- 3:1:0:1 sdb 8:16 undef ready running
create: mpath0 (3600c0ff000271f5d02ff105701000000) undef HP,MSA 2040 SAN
size=558G features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
|- 2:0:0:0 sdc 8:32 undef ready running
`- 4:0:0:0 sde 8:64 undef ready running
create: mpath1 (3600c0ff00027217d28ff105701000000) undef HP,MSA 2040 SAN
size=4.1T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
|- 2:0:0:1 sdd 8:48 undef ready running
`- 4:0:0:1 sdf 8:80 undef ready running

---------------------------------------------------------

here is our multipath.conf

cat /etc/multipath.conf

defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^(td|hd)[a-z]"
    devnode "^dcssblk[0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*"
    device {
        vendor "DGC"
        product "LUNZ"
    }
    device {
        vendor "EMC"
        product "LUNZ"
    }
    device {
        vendor "IBM"
        product "Universal Xport"
    }
    device {
        vendor "IBM"
        product "S/390.*"
    }
    device {
        vendor "DELL"
        product "Universal Xport"
    }
    device {
        vendor "SGI"
        product "Universal Xport"
    }
    device {
        vendor "STK"
        product "Universal Xport"
    }
    device {
        vendor "SUN"
        product "Universal Xport"
    }
    device {
        vendor "(NETAPP|LSI|ENGENIO)"
        product "Universal Xport"
    }
}
blacklist_exceptions {
    wwid "3600c0ff000271f5d02ff105701000000"
    wwid "3600c0ff00027217d28ff105701000000"
}
multipaths {
    multipath {
        wwid "3600c0ff000271f5d02ff105701000000"
        alias mpath0
    }
    multipath {
        wwid "3600c0ff00027217d28ff105701000000"
        alias mpath1
    }
}

---------------------------------------------------------

i've seen in the multipath dry run that the local, single-path devices are listed as mpatha and mpathb;
i think modifying the second row of the blacklist section as follows may solve this discrepancy:

devnode "^(td|hd|sd)[a-z]"

waiting for your ideas about our config!!

regards,

francesco
 

Hi francesco, sorry for the delay, I'm packing my bags to leave for the USA next week. Here is my multipath configuration:

defaults {
    polling_interval 3
    path_selector "round-robin 0"
    max_fds "max"
    path_grouping_policy multibus
    uid_attribute "ID_SERIAL"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}

blacklist {
    devnode "^sd[a]$"
}

multipaths {
    multipath {
        wwid "36589cfc0000008918d21b0349e2746ea"
        alias vDisk
    }
    multipath {
        wwid "36589cfc0000009f6a7b559ffc35e5015"
        alias vDFAST
    }
}

this is iSCSI; in another environment I have an HP EVA 6000 with FC. I have a question: I see in your settings that you have two different storages, which models do you have?
 
@rickygm

first of all, many thanks again for your time!!

extracted from the opening post of this thread:

we've a 3-node cluster running on v4.x using shared storage based on HP MSA2040FC...

our issue started after the update of 1 node....
we've done upgrades to packages using the enterprise repository (on June 7th, 2016!)

this upgrade touched the kernel and a lot of submodules....
we've four nodes, all double-connected with fibers directly to the MSA controllers..
the strange thing is that everything was working fine before the kernel upgrade!! (note: we were using the old multipath.conf file format, just as stated in the old version of the wiki page about iSCSI and multipath config)
some warnings were present in syslog (deprecation notices suggesting the new ID_SERIAL usage, but everything was working fine!!)

i'll try your latest cleaned-up configuration file ASAP!!

i'll keep you and all the other thread followers updated,
regards....

PS: have a nice trip!!!

regards,
Francesco
 

Thanks coppola_f. Put in a new clean configuration, and remember to differentiate the servers' local disks from what the MSA shows you: add your local disks to the blacklist in multipath.conf.

lsscsi -s is your friend; I have attached an example.
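(a minimal sketch of that check; lsscsi is a separate Debian package, so it may need an "apt-get install lsscsi" first, and the scsi_id call is just one way to get the WWID of a local disk so it can be blacklisted explicitly:)

# list SCSI disks with vendor/model and size, to tell local RAID volumes from SAN LUNs
lsscsi -s
# print the WWID of a local disk, e.g. /dev/sda, for an explicit wwid blacklist entry
/lib/udev/scsi_id -g -u /dev/sda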

PS: I have not seen much progress on the Debian side to solve this problem...
https://lists.debian.org/debian-user/2016/04/msg00839.html
 

Attachments

  • local-storage.JPG (38.9 KB)
@rickygm
on-site right now,

for operational reasons, i was unable to work on node1 of the cluster (the node originally presenting the issue!)
btw, we've four nodes in total; node3 has been updated to the same s/w level as node1....

so...

before touching node3, the situation was the same as on node1 (same s/w versions, same results from multipath -ll, exact same configuration file).

well,

we've moved the active VMs away from the node,
then modified the multipath.conf file following your instructions:

defaults {
    polling_interval 3
    path_selector "round-robin 0"
    max_fds "max"
    path_grouping_policy multibus
    uid_attribute "ID_SERIAL"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}
blacklist {
    devnode "^sd[ab]$"
}
multipaths {
    multipath {
        wwid "3600c0ff000271f5d02ff105701000000"
        alias mpath0
    }
    multipath {
        wwid "3600c0ff00027217d28ff105701000000"
        alias mpath1
    }
}

restarted the multipath daemon without success,
then rebooted the server with the same results....
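(for completeness, a minimal sketch of the kind of post-reboot sanity checks that apply here; assuming the standard Debian Jessie tooling shipped with Proxmox 4.x:)

systemctl status multipath-tools     # is the multipath service actually running?
lsmod | grep dm_multipath            # is the device-mapper multipath kernel module loaded?
journalctl -b | grep -i multipath    # boot-time messages from multipathd and the kernel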

i've attached part of the syslog extracted from the Proxmox console just after server boot....

see the first multipath-tools message on row 104...
followed by kernel / device-mapper errors....

the new blacklist section seems to be working fine (/dev/sda and /dev/sdb, the local node disks single-channel connected to the local RAID card, have disappeared from multipath arbitration), but the mpath0 and mpath1 devices (/dev/sdc + /dev/sde --> mpath0 // /dev/sdd + /dev/sdf --> mpath1) are still missing!!

finally:

multipath -ll

still returning nothing!!!

multipath -d

still presents the following output:

create: mpath0 (3600c0ff000271f5d02ff105701000000) undef HP,MSA 2040 SAN
size=558G features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
|- 2:0:0:0 sdc 8:32 undef ready running
`- 4:0:0:0 sde 8:64 undef ready running
create: mpath1 (3600c0ff00027217d28ff105701000000) undef HP,MSA 2040 SAN
size=4.1T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
|- 2:0:0:1 sdd 8:48 undef ready running
`- 4:0:0:1 sdf 8:80 undef ready running

waiting for your ideas about these infos....

many thanks again for your time,

regards,

Francesco
 

guys,

googling around i've found this article:

https://access.redhat.com/discussions/1307143

it refers to another page with a solution (that page is in the subscription-only section of the Red Hat KB);
i'm unable to access it because i don't have that kind of access on the Red Hat site.

so, if someone has active registered access to that level of the site and the ability to inspect the "solution" contents...

then please verify whether the solution may apply to our issue....

it would be very helpful!!

many thanks again for your time!

regards,

francesco
 
