[SOLVED] iSCSI multipath

101pipers

New Member
Sep 18, 2018
Hi,

We are trying to deploy an HA storage environment on Proxmox using multipath, with a Synology NAS (RS1619xs+) as the iSCSI SAN, and we cannot get the multipath driver to manage the iSCSI LUNs. We need a little help to get further, or to interpret the errors we are making.

This is what we are doing:

On the Synology side:
- We made two LACP bonds of two interfaces each, with IP addressing on separate VLANs, so the target is exposed on two portals, let's say 192.168.10.1/24 and 192.168.11.1/24
- We created a LUN associated with this target (thick provisioning, header and data digest enabled, "permit multiple sessions" on, no CHAP auth, default values for max send/receive segment bytes)

On Proxmox side:
- Installed multipath-tools
- Performed discovery and login for the two connections on the host, so we have two physical devices, sdc and sdd (session check sketch below):
iscsiadm -m discovery -t st -p 192.168.10.1 --> returns the target name
iscsiadm -m node -l -T iqn.target_name -p 192.168.10.1
iscsiadm -m node -l -T iqn.target_name -p 192.168.11.1
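To double-check that both sessions are really up before involving multipath (a sketch; the session output shown is only illustrative, and iqn.target_name is the placeholder from above):
# iscsiadm -m session
tcp: [1] 192.168.10.1:3260,1 iqn.target_name (non-flash)
tcp: [2] 192.168.11.1:3260,1 iqn.target_name (non-flash)
# lsblk -S | grep SYNOLOGY --> should list both sdc and sdd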
- Configured the multipath driver (/etc/multipath.conf); see the wwid check sketch after the file:
# cat /etc/multipath.conf
blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "36001405ff5844dcd9892d452bd8f30da"
}

multipaths {
    multipath {
        wwid "36001405ff5844dcd9892d452bd8f30da"
        alias synology
    }
}

defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}

devices {
    device {
        vendor "*"
        product "Universal Xport"
        path_checker tur
        path_grouping_policy multibus
        failback immediate
        rr_min_io 100
        rr_weight priorities
        no_path_retry 5
    }
}
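The wwid whitelisted above has to match what udev reports for both paths; it can be cross-checked like this (a sketch, with the expected output being the wwid from the config):
# /lib/udev/scsi_id -g -u -d /dev/sdc
36001405ff5844dcd9892d452bd8f30da
# /lib/udev/scsi_id -g -u -d /dev/sdd
36001405ff5844dcd9892d452bd8f30da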
- Restarted the multipath service
- This is the log we see:
# multipath -v3 -d -ll
Jun 10 18:11:52 | set open fds limit to 1048576/1048576
Jun 10 18:11:52 | loading //lib/multipath/libchecktur.so checker
Jun 10 18:11:52 | checker tur: message table size = 3
Jun 10 18:11:52 | loading //lib/multipath/libprioconst.so prioritizer
Jun 10 18:11:52 | foreign library "nvme" loaded successfully
Jun 10 18:11:52 | sdb: blacklisted, udev property missing
Jun 10 18:11:52 | sda: udev property ID_WWN whitelisted
Jun 10 18:11:52 | sda: mask = 0x7
Jun 10 18:11:52 | sda: dev_t = 8:0
Jun 10 18:11:52 | sda: size = 6563452592
Jun 10 18:11:52 | sda: vendor = HPE
Jun 10 18:11:52 | sda: product = LOGICAL VOLUME
Jun 10 18:11:52 | sda: rev = 2.65
Jun 10 18:11:52 | sda: h:b:t:l = 0:1:0:0
Jun 10 18:11:52 | sda: tgt_node_name =
Jun 10 18:11:52 | sda: path state = running
Jun 10 18:11:52 | sda: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jun 10 18:11:52 | sda: serial = PEYHC0DRHDF4BD
Jun 10 18:11:52 | sda: get_state
Jun 10 18:11:52 | sda: detect_checker = yes (setting: multipath internal)
Jun 10 18:11:52 | failed to issue vpd inquiry for pgc9
Jun 10 18:11:52 | sda: path_checker = tur (setting: multipath internal)
Jun 10 18:11:52 | sda: checker timeout = 30 s (setting: kernel sysfs)
Jun 10 18:11:52 | sda: tur state = up
Jun 10 18:11:52 | sdc: udev property ID_WWN whitelisted
Jun 10 18:11:52 | sdc: mask = 0x7
Jun 10 18:11:52 | sdc: dev_t = 8:32
Jun 10 18:11:52 | sdc: size = 10737418240
Jun 10 18:11:52 | sdc: vendor = SYNOLOGY
Jun 10 18:11:52 | sdc: product = iSCSI Storage
Jun 10 18:11:52 | sdc: rev = 4.0
Jun 10 18:11:52 | sdc: h:b:t:l = 2:0:0:1
Jun 10 18:11:52 | sdc: tgt_node_name = iqn.2000-01.com.synology:iscsi-02.Target-1.eae08ecda9
Jun 10 18:11:52 | sdc: path state = running
Jun 10 18:11:52 | sdc: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jun 10 18:11:52 | sdc: serial = ff5844dc-9892-452b-8f30-a1e70eaf26ee
Jun 10 18:11:52 | sdc: get_state
Jun 10 18:11:52 | sdc: detect_checker = yes (setting: multipath internal)
Jun 10 18:11:52 | failed to issue vpd inquiry for pgc9
Jun 10 18:11:52 | sdc: path_checker = tur (setting: storage device autodetected)
Jun 10 18:11:52 | sdc: checker timeout = 30 s (setting: kernel sysfs)
Jun 10 18:11:52 | sdc: tur state = up
Jun 10 18:11:52 | sdd: udev property ID_WWN whitelisted
Jun 10 18:11:52 | sdd: mask = 0x7
Jun 10 18:11:52 | sdd: dev_t = 8:48
Jun 10 18:11:52 | sdd: size = 10737418240
Jun 10 18:11:52 | sdd: vendor = SYNOLOGY
Jun 10 18:11:52 | sdd: product = iSCSI Storage
Jun 10 18:11:52 | sdd: rev = 4.0
Jun 10 18:11:52 | sdd: h:b:t:l = 3:0:0:1
Jun 10 18:11:52 | sdd: tgt_node_name = iqn.2000-01.com.synology:iscsi-02.Target-1.eae08ecda9
Jun 10 18:11:52 | sdd: path state = running
Jun 10 18:11:52 | sdd: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jun 10 18:11:52 | sdd: serial = ff5844dc-9892-452b-8f30-a1e70eaf26ee
Jun 10 18:11:52 | sdd: get_state
Jun 10 18:11:52 | sdd: detect_checker = yes (setting: multipath internal)
Jun 10 18:11:52 | failed to issue vpd inquiry for pgc9
Jun 10 18:11:52 | sdd: path_checker = tur (setting: storage device autodetected)
Jun 10 18:11:52 | sdd: checker timeout = 30 s (setting: kernel sysfs)
Jun 10 18:11:52 | sdd: tur state = up
Jun 10 18:11:52 | loop0: blacklisted, udev property missing
Jun 10 18:11:52 | loop1: blacklisted, udev property missing
Jun 10 18:11:52 | loop2: blacklisted, udev property missing
Jun 10 18:11:52 | loop3: blacklisted, udev property missing
Jun 10 18:11:52 | loop4: blacklisted, udev property missing
Jun 10 18:11:52 | loop5: blacklisted, udev property missing
Jun 10 18:11:52 | loop6: blacklisted, udev property missing
Jun 10 18:11:52 | loop7: blacklisted, udev property missing
Jun 10 18:11:52 | dm-0: blacklisted, udev property missing
Jun 10 18:11:52 | dm-1: blacklisted, udev property missing
Jun 10 18:11:52 | dm-2: blacklisted, udev property missing
Jun 10 18:11:52 | dm-3: blacklisted, udev property missing
Jun 10 18:11:52 | dm-4: blacklisted, udev property missing
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
0:1:0:0 sda 8:0 -1 undef undef HPE,LOGICAL VOLUME unknown
2:0:0:1 sdc 8:32 -1 undef undef SYNOLOGY,iSCSI Storage unknown
3:0:0:1 sdd 8:48 -1 undef undef SYNOLOGY,iSCSI Storage unknown
Jun 10 18:11:52 | libdevmapper version 1.02.155 (2018-12-18)
Jun 10 18:11:52 | DM multipath kernel driver v1.13.0
Jun 10 18:11:53 | unloading const prioritizer
Jun 10 18:11:53 | unloading tur checker

- We can't see any multipath-managed iSCSI device:
# multipath -ll --> returns nothing
- If we try to create a VG on sdc or sdd we get an error, because LVM sees a duplicated disk:
# pvcreate /dev/sdc
WARNING: Not using device /dev/sdd for PV 303nyq-hCRA-ZX23-IuUR-c2Np-Rli5-EzEDXt.
WARNING: PV 303nyq-hCRA-ZX23-IuUR-c2Np-Rli5-EzEDXt prefers device /dev/sdc because device was seen first.
Cannot use device /dev/sdc with duplicates.
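As a side note, until multipath claims the paths, LVM keeps scanning both raw sd devices and complains about duplicates. A common workaround (a sketch, not something we have verified; adjust the patterns to your local disks) is to restrict scanning in /etc/lvm/lvm.conf so that only the mapper devices and the local volume are considered:

devices {
    # accept multipath maps and the local HPE volume, reject everything else
    global_filter = [ "a|/dev/mapper/|", "a|/dev/sda.*|", "r|.*|" ]
}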

Our questions...
- Is there any multipath.conf sample for our storage device, or specific options that we should use?
- What does the error "failed to issue vpd inquiry for pgc9" mean? Is that why we can't get our multipathed iSCSI device (/dev/mapper/mpath) working on our host?
- Would any other architecture (NFS, no LACP, etc.) serve us better? We are looking for an HA storage environment, load balancing if it can be deployed, and high performance on the links (the possibility of using the full 2 Gbps of the LACP bond)
- Is there any other recommended configuration regarding switching (e.g. jumbo frames) or hardware offload functions in the NICs (not HBAs)?

Thanks in advance
 
- Is there any multipath.conf sample for our storage device, or specific options that we should use?

Please ask the vendor of your storage, but in general iSCSI is as simple as FC to set up; it just works for most devices.

- Is there any other recommended configuration regarding switching (e.g. jumbo frames) or hardware offload functions in the NICs (not HBAs)?

That depends heavily on your hardware. If you have hardware that supports offloading, use it. Jumbo frames are also common in iSCSI setups and will increase performance.
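For example, on a Proxmox host using ifupdown, jumbo frames could be enabled on a storage bond like this (a sketch; bond0, the slave NICs and the VLAN addressing are assumptions, and the switch ports and the Synology must be set to the same MTU):

# /etc/network/interfaces (excerpt)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    mtu 9000

auto bond0.10
iface bond0.10 inet static
    address 192.168.10.2/24
    mtu 9000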

I can recommend starting with an almost empty multipath.conf and seeing if that works better. Personally, I don't like the blacklist-everything-then-whitelist setup; I prefer to blacklist specific devices (e.g. your local RAID controller) so that everything else passes directly through multipath.
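Such a minimal configuration could look like this (a sketch; the wwid of the local HPE logical volume is hypothetical, take the real one from scsi_id as shown earlier):

# /etc/multipath.conf
defaults {
    user_friendly_names yes
}

blacklist {
    # local HPE logical volume (hypothetical wwid, check with scsi_id)
    wwid "3600508b1001c0123456789abcdef0123"
}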
 
We will give it a try with a simpler multipath.conf to see if we can get the driver working, and post back the results.
Thanks again...
 
Finally we could make it work with no problem. We just weren't using the multipath tool properly in the last step (we didn't add the device with the -a flag!). We did the same on all cluster hosts, created the PV and VG manually, and added the VG to the cluster as an LVM storage shared between nodes. It's working!
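For reference, the final steps looked roughly like this (a sketch; the alias comes from our multipath.conf, the VG and storage names are examples, and on our multipath-tools version -a takes a device path):

# add the wwid of one path to /etc/multipath/wwids so multipath claims it
multipath -a /dev/sdc
# reload the maps; /dev/mapper/synology should now appear
multipath -r
multipath -ll
# create the PV and VG on the multipath device (on one node only)
pvcreate /dev/mapper/synology
vgcreate vg_synology /dev/mapper/synology
# add it to the cluster as shared LVM storage
pvesm add lvm synology-lvm --vgname vg_synology --shared 1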
Thanks again,
 
Please, can anybody tell me how to mark this thread as solved?
Edit the first post in this thread. The dropdown next to the title should have a `solved` item.
 
I have the same issue.
Could you please explain and give an example of "didn't add the device with the -a flag"?
Thanks.
 
