Proxmox 4.2-11 multipath issue

@rickygm
on-site just now.

For operational reasons we are unable to work on node1 of the cluster (the node that originally presented the issue!).
BTW, we have four nodes in total; node3 has been updated to the same software level as node1...

so...

Before touching node3 the situation there was the same as on node1 (same software versions, same results from multipath -ll, exact same configuration file).

well,

we moved the active VMs away from the node,
then modified the multipath.conf file following your instructions:

defaults {
    polling_interval 3
    path_selector "round-robin 0"
    max_fds "max"
    path_grouping_policy multibus
    uid_attribute "ID_SERIAL"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}
blacklist {
    devnode "^sd[ab]$"
}
multipaths {
    multipath {
        wwid "3600c0ff000271f5d02ff105701000000"
        alias mpath0
    }
    multipath {
        wwid "3600c0ff00027217d28ff105701000000"
        alias mpath1
    }
}
We restarted the multipath daemon without success,
then restarted the server with the same results...
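(For anyone reproducing this on Proxmox 4.x / Debian Jessie, the daemon restart would typically be done like this; a sketch, assuming the stock multipath-tools package:)

/etc/init.d/multipath-tools restart
# or equivalently, through systemd's sysv compatibility:
systemctl restart multipath-tools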

I've attached part of the syslog extracted from the Proxmox console just after server boot...

See row 104 for the first multipath-tools message,
followed by kernel / device-mapper errors...

The new blacklist section seems to be working fine (/dev/sda and /dev/sdb, the local node disks single-channel connected to the local RAID card, have disappeared from multipath arbitration), but the mpath0 and mpath1 devices (/dev/sdc + /dev/sde --> mpath0 // /dev/sdd + /dev/sdf --> mpath1) are still missing!!

Finally:

multipath -ll

still returns nothing!!!

multipath -d

still presents the following output:

create: mpath0 (3600c0ff000271f5d02ff105701000000) undef HP,MSA 2040 SAN
size=558G features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
|- 2:0:0:0 sdc 8:32 undef ready running
`- 4:0:0:0 sde 8:64 undef ready running
create: mpath1 (3600c0ff00027217d28ff105701000000) undef HP,MSA 2040 SAN
size=4.1T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
|- 2:0:0:1 sdd 8:48 undef ready running
`- 4:0:0:1 sdf 8:80 undef ready running
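(Note: -d is multipath's dry-run flag, so the output above only shows what would be created; since multipath -ll lists nothing, the maps apparently never get created for real. For comparison, a forced, verbose, non-dry-run attempt would be something like:)

multipath -v2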

Waiting for your ideas on this info...

Many thanks again for your time,

regards,

Francesco

Hi, what you describe with mpath0 and mpath1 is strange. Could you post the output of these two commands:

ls -l /dev/disk/by-path/ /dev/mapper/*

dmesg | grep -i "attached"
 
Guys...
Is no one answering?? Does no one have suggestions?!?

It seems really strange to me!!

Where are the developers?!?
Surely they read the forums...
I'm going to remind them that everything was running fine prior to the kernel updates!!
And that those kernel updates were released on the subscription repos!! Not the public or testing ones!!!

I still think this platform is a great project...
but...

finding help is becoming hard!!

regards,

Francesco
 
Hi Francesco,
I saw your thread about this and I'm a little bit worried about it.
I'm planning to reinstall Proxmox 4.3 on a running server that supports multipath (due to an internal SAN that supports multipath).
I remember that when I first installed the system, I had to install Debian and then convert it to Proxmox to get the right version working.
Did you try downloading the latest multipath-tools from the Ubuntu repositories, or compiling the latest version yourself? (I noticed that you are not using the latest one.)
If you want, you can add me on Skype and we can run some tests on my system (I'm Italian too, so I guess we keep the same working hours).
Let me know
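(For reference, a quick way to check which multipath-tools version is actually installed on a Debian-based node; standard dpkg commands, nothing Proxmox-specific:)

dpkg -l multipath-tools
# or just the version string:
dpkg-query -W -f='${Version}\n' multipath-tools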
 
Hi,
most problems related to multipath are usually caused by differences in SAN / multipath configuration, and have little to do with virtualization.
I am using multipath in my testlab and it's working.
What I noticed with multipath is that it sometimes takes time before udev gets the LUN's scsi_id. Only when the scsi_id is there can the multipath daemon "coalesce" (damned) the device into a new path.

To check this, install the lsscsi package and check with

lsscsi -i

that all your iSCSI LUNs are there *with an associated scsi_id*.
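For the LUNs in this thread, healthy output would look roughly like this (a reconstruction from the multipath -d output above; the firmware-revision column is made up, and a "-" in the last column would mean udev has not assigned a scsi_id yet):

[2:0:0:0]  disk  HP  MSA 2040 SAN  G22x  /dev/sdc  3600c0ff000271f5d02ff105701000000
[4:0:0:0]  disk  HP  MSA 2040 SAN  G22x  /dev/sde  3600c0ff000271f5d02ff105701000000
[2:0:0:1]  disk  HP  MSA 2040 SAN  G22x  /dev/sdd  3600c0ff00027217d28ff105701000000
[4:0:0:1]  disk  HP  MSA 2040 SAN  G22x  /dev/sdf  3600c0ff00027217d28ff105701000000

You can also query a single path directly with udev's scsi_id helper (it lives under /lib/udev on Debian Jessie):

/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc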
 
@manu ...

I've received notice from my customer that the latest updates (applied yesterday afternoon)
seem to solve this issue.
I'll be on-site next week, so maybe I can be more precise about the package versions!!

If you have questions about the versions,
tell me which info would be helpful and I'll post it on this forum...

regards,
Francesco
 
Guys,

latest updates:
I confirm... after updating the Proxmox nodes (with the latest updates, as I said in my previous post)
everything seems to be running fine.

Matteo (he's my customer's IT manager)
applied all updates from the subscription repo...
then cleaned up the old kernels...

He used the attached multipath.conf file on all nodes and restarted them...
and multipath -ll now lists the multipathed devices correctly!
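(For the record, "listed correctly" should look roughly like this; a reconstruction from the earlier multipath -d output, so the dm-N names are assumptions, and with no_path_retry queue the features field would typically show queue_if_no_path:)

mpath0 (3600c0ff000271f5d02ff105701000000) dm-2 HP,MSA 2040 SAN
size=558G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:0:0 sdc 8:32 active ready running
  `- 4:0:0:0 sde 8:64 active ready running
mpath1 (3600c0ff00027217d28ff105701000000) dm-3 HP,MSA 2040 SAN
size=4.1T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:0:1 sdd 8:48 active ready running
  `- 4:0:0:1 sdf 8:80 active ready running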

We currently have the versions listed in the attached pveversion-v.txt file running on the nodes, and everything is running smoothly!!

regards,
Francesco
 


Excellent, congratulations!
 
Yes,
the customer has an active subscription (Basic, but active!!).

If you remember,
in one of my first posts...
I was just pointing out that an update
released on the paid subscription repository was generating this issue...

Bye,
Francesco
 
For everyone who, like me, uses multipath for the boot disk (sda), you have to:

modify /etc/lvm/lvm.conf, adding:
filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]
types = [ "device-mapper", 1 ]
preferred_names = [ "^/dev/mpath/", "^/dev/mapper/", "^/dev/disk/by-id/", "^/dev/[hs]d", "^/dev/dm" ]
(the filter accepts only /dev/disk/by-id/* paths and rejects everything else, so LVM scans the stable multipath-backed names instead of the raw /dev/sdX paths)
restart multipath:
/etc/init.d/multipath-tools-boot restart; /etc/init.d/multipath-tools restart
reboot the server, then:
update-initramfs -u -k all
reboot the server
update-grub
reboot the server

If everything is OK, the update-grub command should not print error messages about an inaccessible device; a quick extra check is sketched below.
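(Standard LVM2 command; the exact names depend on your volume groups:)

pvs -o pv_name,vg_name
# the PVs should now be reported on /dev/mapper/* or /dev/disk/by-id/* devices,
# not on the raw /dev/sdXN partitions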

Hope this helps
 
