iSCSI Multipath Prio

cpzengel

Hi

My stupid ReadyDATA exposes multipath on iSCSI and prefers the 1GBit interface.
How can I make iSCSI give priority to my 10GBit interface?
Even when I set the portal to the 10GBit IP, it uses the LAN interface.
Changing this on the SAN side is not an option.

Readydata deserved to die!

Cheers
 
Routing is fine.
The problem is that you add the portal and not the targets or paths themselves.
I added the portal with the SAN IP.
If I scan the SAN IP, it delivers all paths.
The problem is that the 192.x path is used exclusively.

# pvesm iscsiscan -portal 172.16.1.249

iqn.1994-11.com.netgear:rd249:244f7523:group1 172.16.1.249:3260,192.168.14.249:3260,5.178.40.171:3260

Even though the 172.x IP is at the beginning, the 192.x one is being used.
I don't want the complexity of multipath-tools, I just want to prefer the 172.x 10GBit line.
 
Update:
The desired solution is to offline the unwanted drive:

  • echo offline > /sys/block/sdc/device/state
  • echo 1 > /sys/block/sdc/device/delete


Those commands will offline and remove the unwanted iSCSI path.
The aim is to make this behavior permanent, or to set up an active/passive configuration that prefers the 10GBit path; see the sketch below.
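If it has to go through multipath-tools after all, something like this in /etc/multipath.conf is roughly what I imagine (untested sketch; the wwid and the sdX names are placeholders, and sdX names are not guaranteed to be stable across reboots):

Code:
multipaths {
    multipath {
        wwid <wwid-of-the-lun>              # take the WWID from multipath -ll
        path_grouping_policy failover       # active/passive: only one path group active at a time
        prio "weightedpath"                 # rank paths by manual weights
        prio_args "devname sdd 50 sdc 1"    # sdd = 10GBit path, sdc = 1GBit path (placeholders)
        failback immediate                  # switch back to the preferred path when it returns
    }
}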
Any help?
 
So your setup is not properly separated if you use IPs that are reachable over each NIC.

Why don't you use the same NIC/Bond of NICs for all multipath IPs?
 

As you can see from the scan above, the portal delivers all routes to the available paths, but no priorities.
 
To make this clear again: both LANs are separated by a correct netmask, but the Proxmox iSCSI initiator still prefers the obviously slower path.

Yes, I can see that, but in this case you do not have separated networks ... more precisely, networks separated into SAN and non-SAN traffic. You route two different subnets over two bonds/NICs to the same destination. That is not a separated storage network. Separate it properly and you will not have any problems.

What is the benefit of using three subnets for your storage? I've never seen this in production. All the SANs I have ever seen have multiple IPs in the same range, or a highly available IP to connect to. On the client you have two dedicated NICs, or one bonded NIC, to connect to the SAN, similar to the setup you would have with an FC-based SAN.
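Roughly, a properly separated setup on the Proxmox side could look like this in /etc/network/interfaces (only a sketch; the interface names and host addresses are examples, not taken from your configuration):

Code:
auto enp5s0
iface enp5s0 inet static
    address 172.16.1.48/24      # dedicated 10GBit NIC, storage traffic only, no gateway
    mtu 9000                    # only if the SAN side uses the same MTU

auto vmbr0
iface vmbr0 inet static
    address 192.168.14.48/24    # 1GBit bridge for LAN/VM traffic, no storage IPs here
    gateway 192.168.14.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0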
 
Hello,

On PMX, can you show the output from this:

Code:
multipath -ll

It must show something like this:
Code:
# multipath -ll
mpath0 (449455400000000006b756e30000000000000000000000000) dm-2 IET,VIRTUAL-DISK
size=50G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sda 8:0   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdb 8:16  active ready  running
 
thanks for helping


Code:
root@pve48:~# multipath -ll
3600144f06892c77f00005ae1e86d0001 dm-5 NETGEAR,ReadyDATA
size=110G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 13:0:0:0 sdd 8:48 active ready running
3600144f06ed1d58e00005ae33a220001 dm-9 NETGEAR,ReadyDATA
size=100G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=enabled
| `- 14:0:0:0 sdf 8:80 active ready running
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:0 sdg 8:96 active ready running
3600144f06892c77f00005ae318ab0002 dm-7 NETGEAR,ReadyDATA
size=200G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 12:0:0:1 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 13:0:0:1 sde 8:64 active ready running
For example,

echo offline > /sys/block/sdc/device/state (or running)

brought temporary relief on the first LUN.
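Another idea I have not tried yet: instead of offlining the disk after login, tell open-iscsi not to log into the 1GBit portal at all. The target and portal below are the ones from the scan above; I do not know yet whether the Proxmox storage activation simply logs in again.

Code:
# mark the 1GBit portal so it is not logged into automatically
iscsiadm -m node \
  -T iqn.1994-11.com.netgear:rd249:244f7523:group1 \
  -p 192.168.14.249:3260 \
  -o update -n node.startup -v manual

# drop the session that already exists on that portal
iscsiadm -m node \
  -T iqn.1994-11.com.netgear:rd249:244f7523:group1 \
  -p 192.168.14.249:3260 --logout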
 
As far as I can see, you do not have a working multipath iSCSI setup. In my understanding, multipath iSCSI can only work if both network paths have the same characteristics (link speed, MTU, etc.). It is not possible to use one network path with mtu=9000 and the second path with mtu=1500.
Another note: on your iSCSI server every path has the same prio, so I think it is load-balancing. How could your iSCSI client (PMX) send one stream with 1500 MTU and receive a 9000 MTU stream?
Maybe the right decision is to create a multipath with a failover configuration on your iSCSI server.
Take all I wrote as a guess (intuition), because I have not used multipath iSCSI in production (I only did some tests to understand the basics).
If you can, I think you could solve your problem using an iSCSI proxy (any TCP proxy like haproxy can be used, with failover backends pointing at your iSCSI IPs), for example the sketch below.
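Something like this in haproxy.cfg is what I mean (only a sketch of the idea, untested with iSCSI; the backend IPs are the portals from your scan, the bind address is just an example):

Code:
listen iscsi-proxy
    bind 127.0.0.1:3260
    mode tcp
    timeout connect 5s
    timeout client  1h
    timeout server  1h
    server san-10g 172.16.1.249:3260 check
    server san-1g  192.168.14.249:3260 check backup   # only used when the 10GBit path is down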
 
