iSCSI Multipath Preferred network

Pavletto

New Member
Sep 13, 2023
Hi everyone!
I would like to ask for your help:
I have a Proxmox 8 installation with the following networks:
vmbr0 (10Gbps network): management network / Proxmox web interface (192.168.10.0/24)
vmbr40 (40Gbps network): trunk connection with VLAN awareness enabled
vlan9 (part of the vmbr40 iface): 192.168.25.0/24
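For reference, a minimal sketch of what such a setup typically looks like in /etc/network/interfaces (the physical NIC names eno1/enp65s0 and the host addresses here are just placeholders, not my real ones):
Code:
# 10 Gbps management bridge / Proxmox web interface
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# 40 Gbps trunk bridge, VLAN aware
auto vmbr40
iface vmbr40 inet manual
        bridge-ports enp65s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# storage VLAN on top of the trunk
auto vlan9
iface vlan9 inet static
        address 192.168.25.2/24
        vlan-raw-device vmbr40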
I'm trying to add an iSCSI LUN (on a QNAP) with multipath enabled (on both the 10.0 and 25.0 networks), but with the 25.0 network preferred and the 10.0 network used only for failover.
So I installed multipath-tools. Then:
Code:
iscsiadm -m discovery -t st -p 192.168.10.100
iscsiadm -m node -l -T iqn.2004-04.com.qnap:someinfo.mylun.xxx
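To see which local IP each session actually uses and which sdX device belongs to which portal, the detailed session print is handy (generic iscsiadm usage, nothing QNAP-specific):
Code:
# print level 3 lists, per session, the target portal, the local
# interface/address of the connection and the attached SCSI disk (sdX)
iscsiadm -m session -P 3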

Code:
root@pve-down-1:~# iscsiadm -m session
tcp: [12] 192.168.10.100:3260,1 iqn.2004-04.com.qnap:ts-ec...
tcp: [13] 192.168.25.100:3260,1 iqn.2004-04.com.qnap:ts-ec...

Code:
/lib/udev/scsi_id -g -u -d /dev/sdc
36e843b670b15a22d9c34d4b43d8ad0d5
/lib/udev/scsi_id -g -u -d /dev/sdd
36e843b670b15a22d9c34d4b43d8ad0d5
multipath -a 36e843b670b15a22d9c34d4b43d8ad0d5
multipath -a 36e843b670b15a22d9c34d4b43d8ad0d5
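As far as I understand, multipath -a only records the WWID in /etc/multipath/wwids, so a quick sanity check is:
Code:
# the WWID added above should now be listed here
cat /etc/multipath/wwids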

Code:
nano /etc/multipath.conf

blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "36e843b670b15a22d9c34d4b43d8ad0d5
        wwid "36e843b670b15a22d9c34d4b43d8ad0d5"
}
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    failover
        uid_attribute           ID_SERIAL
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
multipaths {
        multipath {
                wwid                    36e843b670b15a22d9c34d4b43d8ad0d5
                alias                   qnap2
                path_selector           "round-robin 0"
                path_grouping_policy    failover
                rr_min_io               100
                prio                    iet
                prio_args               preferredip=192.168.25.100
        }
}
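To apply the edited file, something like this should be enough on recent multipath-tools (a sketch; restarting multipathd.service works as well):
Code:
# make the running daemon re-read /etc/multipath.conf
multipathd reconfigure
# rebuild the multipath maps with the new settings
multipath -r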

Code:
root@pve-down-1:~# multipath -ll
qnap2 (36e843b670b15a22d9c34d4b43d8ad0d5) dm-5 QNAP,iSCSI Storage
size=250G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 18:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=50 status=enabled
  `- 19:0:0:0 sdd 8:48 active ready running

So it looks like everything is OK, but when I run a test (
Code:
fio --filename=/dev/mapper/qnap2 --direct=1 --rw=read --bs=1m --size=5G --numjobs=200 --runtime=30 --group_reporting --name=file1
) and watch the interfaces with iptraf-ng, I see that the traffic goes through vmbr0 (192.168.10.0). I tried some other values for path_selector and/or path_grouping_policy, but the traffic still goes through vmbr0, OR through vmbr0 AND vlan9/vmbr40 at the same time.
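To double-check which path actually carries the I/O (besides iptraf-ng), I can look at the per-path priorities multipathd computed and at the per-disk throughput during the fio run (assuming the sysstat package is installed for iostat):
Code:
# one line per path, including the priority multipathd assigned to it
multipathd show paths
# extended per-device stats every second; only the active path
# should show significant traffic if failover grouping works
iostat -x sdc sdd 1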


I would appreciate it if somebody could help me make multipath work ONLY through the vlan9/vmbr40 interface, with vmbr0 used ONLY as a failover network.


P.S.
I also tried:

Code:
                prio                    weightedpath
                prio_args               19:0:0:0
and yes, after restarting iscsi/multipath (I don't really know which process exactly), I now have a different priority:
Code:
qnap2 (36e843b68087d9acdadbdd4c3fd857dd4) dm-5 QNAP,iSCSI Storage
size=250G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 19:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=50 status=enabled
  `- 18:0:0:0 sdd 8:48 active ready running
And the traffic goes through the vlan9/vmbr40 iface. But I don't think this is a good solution, because the drive letters and the HBTL can change (that happened once).
So my question is still open :-(
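If I read the multipath.conf man page correctly, weightedpath expects a key type followed by regex/weight pairs, so the fuller form of this workaround would be something like the sketch below. It still keys on the HBTL, though, so it has the same fragility problem:
Code:
                prio                    weightedpath
                # path on SCSI host 19 (the 25.0 network) gets weight 10,
                # path on host 18 (the 10.0 network) gets weight 1
                prio_args               "hbtl 19:.*:.*:.* 10 18:.*:.*:.* 1"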
 
Please use CODE tags, the output is almost unreadable.

With all the storage systems I have worked with, you can set the path settings on the storage itself. I've never set it on the client. I have no other "tricks" than the one you already tried.
 
Please use CODE tags, the output is almost unreadable.
Sorry, my bad, I will use them next time.

I didn't see where I can choose a priority for each LUN in my QNAP storage :-/
Also, about a year and a half ago when I used Proxmox, the IP preference worked well. And right now I have an installation of Alt Linux Virtualization (some fork of Proxmox?) in a test environment, and the IP priority works well there :-/ So maybe it's a bug?
 
I didn't see where I can choose a priority for each LUN in my QNAP storage :-/
Never mind, you have the correct value in your multipath.conf. Have you checked whether the syntax of your file is correct? There seems to be a " missing. Please run multipath -ll and look for errors. In the current output alua is listed, yet this should be iet, shouldn't it?
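One quick way to check the syntax: multipath -t parses /etc/multipath.conf and dumps the effective configuration, so a stray quote or an unknown keyword usually shows up as a warning, e.g.:
Code:
# dump the parsed configuration, keep only the warnings on stderr
multipath -t > /dev/null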
 
There seems to be a " missing
Where? If you mean it should be "preferredip=192.168.25.100" instead of preferredip=192.168.25.100, I tried both ways.

I checked on an Alt Linux node and a Proxmox 8 node: in both cases there is hwhandler='1 alua'.
What I've done today:
installed multipath-tools on the new (2nd) node right after installation and update
created /etc/multipath.conf with the contents from the 1st node, but set preferredip instead of weightedpath
Code:
blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "36e843b68087d9acdadbdd4c3fd857dd4"
}
defaults {
        polling_interval        2
#        path_selector           "round-robin 0"
#        path_grouping_policy    failover
        uid_attribute           ID_SERIAL
        failback                immediate
        find_multipaths_timeout 5
#        no_path_retry           queue
#        user_friendly_names     yes
}
multipaths {
        multipath {
                wwid                    36e843b68087d9acdadbdd4c3fd857dd4
                alias                   qnap2
                path_selector           "round-robin 0"
                path_grouping_policy    failover
                rr_min_io               1000
#                prio                    weightedpath
#                prio_args               18:0:0:0
                prio                    iet
                prio_args               preferredip=192.168.25.100
                no_path_retry           queue
                user_friendly_names     yes
        }
}
joined the cluster with the 1st node (unfortunately I can't remember the exact sequence of steps)
Code:
root@pve-down-2:~# multipath -ll
root@pve-down-2:~# iscsiadm -m session
tcp: [1] 192.168.25.100:3260,1 iqn.2004-04.com.qnap:ts-ec2 (non-flash)
tcp: [2] 192.168.11.100:3260,1 iqn.2004-04.com.qnap:ts-ec2 (non-flash)

root@pve-down-2:~# /lib/udev/scsi_id -g -u -d /dev/sdc
36e843b68087d9acdadbdd4c3fd857dd4
root@pve-down-2:~# /lib/udev/scsi_id -g -u -d /dev/sdd
36e843b68087d9acdadbdd4c3fd857dd4
root@pve-down-2:~# multipath -a 36e843b68087d9acdadbdd4c3fd857dd4
wwid '36e843b68087d9acdadbdd4c3fd857dd4' added
root@pve-down-2:~# nano /etc/multipath.conf
root@pve-down-2:~# multipath reconfigure
root@pve-down-2:~# systemctl restart multipath-tools
root@pve-down-2:~# multipath -ll
qnap2 (36e843b68087d9acdadbdd4c3fd857dd4) dm-5 QNAP,iSCSI Storage
size=250G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 18:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=50 status=enabled
  `- 19:0:0:0 sdd 8:48 active ready running
 
Code:
fio --filename=/dev/mapper/qnap2 --direct=1 --rw=read --bs=1m --size=5G --numjobs=200 --runtime=30 --group_reporting --name=file1
file1

multipath -ll shows 18:0:0:0 as active, but the traffic goes through the right interface!
I restarted multipath-tools a couple of times and the traffic still goes through the right iface.

Then I deleted the LVM and the iSCSI storage from the cluster GUI, ran apt purge multipath-tools, apt purge open-iscsi and rm -rf /etc/multipath/ on the 1st node, then installed open-iscsi and multipath-tools again and created /etc/multipath.conf with the node 2 contents (so the configs are now the same on both nodes).
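Roughly (reconstructed from the description above, not a literal copy of my shell history):
Code:
apt purge multipath-tools open-iscsi
rm -rf /etc/multipath/
apt install open-iscsi multipath-tools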
Code:
root@pve-down-1:~# nano /etc/multipath.conf
root@pve-down-1:~# multipath -a 36e843b68087d9acdadbdd4c3fd857dd4
wwid '36e843b68087d9acdadbdd4c3fd857dd4' added
root@pve-down-1:~# multipath reconfigure
root@pve-down-1:~# multipathd reconfigure
ok
root@pve-down-1:~# systemctl restart multipath-tools
root@pve-down-1:~# multipath -ll
root@pve-down-1:~# iscsiadm -m session
iscsiadm: No active sessions.
root@pve-down-1:~# iscsiadm -m session
tcp: [13] 192.168.11.100:3260,1 iqn.2004-04.com.qnap:ts-ec (non-flash)
tcp: [14] 192.168.25.100:3260,1 iqn.2004-04.com.qnap:ts-ec (non-flash)
root@pve-down-1:~# multipath -ll
qnap2 (36e843b68087d9acdadbdd4c3fd857dd4) dm-5 QNAP,iSCSI Storage
size=250G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 18:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=50 status=enabled
  `- 17:0:0:0 sdb 8:16 active ready running

I ran the test and the traffic on the 1st node also went through the right interface.
So it's a kind of magic (c)
I had also tried to purge these packages on the 1st node before joining the cluster, and nothing changed. But with the cluster and the 2nd node, multipath got fixed... why?
On the one hand, I'm happy that it now works as it should. On the other hand, why didn't it work on the 1st node, and what should I do if this happens again?
 
