iSCSI Performance tests

Try with 2 subnets ;)

I'm pretty sure your packets are always going out through eth1.
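A quick way to check which NIC the kernel actually picks for the SAN (a sketch; 192.168.20.11 is the portal address used later in this thread):

# show the route, source address and output device the kernel would use
ip route get 192.168.20.11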

Hiya,

So I should create a new subnet with my secondary switch, and configure the secondary NIC on the SAN plus the secondary NIC on the Proxmox host to talk to that subnet?

Chris.
 
on your SAN:
first network card: 192.168.20.11/24
second network card: 192.168.21.11/24

on your Proxmox:
eth1 192.168.20.120/24
eth3 192.168.21.120/24

You can use 1 switch or 2 switches, the result is the same.

I don't know how Equallogic SANs work, but with my Nexenta (OpenSolaris) SAN I have only 1 portal, and iscsiadm discovery returns the other paths:

# iscsiadm --mode discovery --type sendtargets --portal 10.5.0.18
10.5.0.18:3260,3 iqn.1986-03.com.sun:02:316dd6a9-76bc-62ea-93fa-d0140e876a4b
10.6.0.18:3260,2 iqn.1986-03.com.sun:02:316dd6a9-76bc-62ea-93fa-d0140e876a4b
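For what it's worth, a rough (untested) sketch of the Proxmox side of that layout in /etc/network/interfaces, using the addresses above:

auto eth1
iface eth1 inet static
        address 192.168.20.120
        netmask 255.255.255.0

auto eth3
iface eth3 inet static
        address 192.168.21.120
        netmask 255.255.255.0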
 

Ok, many thanks for this - it sounds like Out of Hours work to me ;)

The only reason I mention the switch is that it has an IP address of 192.168.20.16, so I don't see how it will route 192.168.21.x addresses.

Cheers,
Chris.
 
Your switch IP is for administration only, isn't it?

Switching is layer 2, so it doesn't route...
(think of an unmanaged switch with no IP at all)

No, I have a separate network for administration.

I have two Cisco switches, both set to the 192.168.20.0/24 subnet and connected with an LACP aggregation link (4 ports). I followed the Equallogic best-practices doc when setting this up.

I think if I want to set it up as you suggest, I will need to remove the LACP link, change one of the switches to 192.168.21.16, and plug all .21 traffic into that. Right?

Chris.
 

Sorry, I have just re-read your post regarding the point about administration.

I have this entry on my Cisco:

interface Vlan1
ip address 192.168.20.16 255.255.255.0

Doesn't that mean it only allows .20 traffic?

Chris.
 
No,
it's just an IP for switch administration.

On my Cisco switch (a 2960), I have 1 VLAN interface with a subnet for administration, and many (+- 25) VLANs/subnets on the same switch.
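For illustration only (hypothetical port number): an access port on such a switch carries whatever subnets the attached hosts use; the Vlan1 address exists purely for managing the switch itself.

interface GigabitEthernet0/10
 description iSCSI port (SAN controller or Proxmox NIC)
 switchport mode access
 switchport access vlan 1
 spanning-tree portfast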

Ok, will give this config a try and report back.

Many thanks for all your help.

Chris.
 
Keep in mind that, in general, multiple NICs with IPs on the same subnet is a bad idea, as outbound traffic will usually just take one path.
 
Interesting. This example shows two interfaces on the same subnet. In theory, could you add more than two NICs?

Chris.

Yes, you can keep adding NICs until you max out the SAN. In the past I've used quad-port PCIe gigabit cards with good results.
 

Wow, this looks like the way to go then. Thanks for the tip. Am doing a test implementation of this on Sunday so will report back any success/failure.

Chris.
 

One last question: do you know if the kernel flags mentioned on the web page are enabled in the Proxmox build? I've searched and can't seem to find this one:

CONFIG_IP_ROUTE_FWMARK=y

The others appear to be there.
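For reference, a quick way to check is to grep the running kernel's config (a sketch, assuming the config file is shipped under /boot as on Debian-based systems):

# look for the policy-routing related options in the running kernel's config
grep -E 'CONFIG_IP_ROUTE_FWMARK|CONFIG_IP_MULTIPLE_TABLES|CONFIG_IP_ADVANCED_ROUTER' /boot/config-$(uname -r)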

Chris.
 
Update.

My /etc/network/interfaces now looks like this:

auto lo
iface lo inet loopback

auto eth1
iface eth1 inet static
address 192.168.20.120
netmask 255.255.255.0
network 192.168.20.0
broadcast 192.168.20.255
up ip route add 192.168.20.0/24 dev eth1 proto kernel scope link src 192.168.20.120 table 20
up ip route add default via 192.168.20.1 dev eth1 table 20
up ip rule add from 192.168.20.120 lookup 20
mtu 9000

auto eth2
iface eth2 inet static
address 192.168.20.121
netmask 255.255.255.0
network 192.168.20.0
broadcast 192.168.20.255
up ip route add 192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.121 table 25
up ip route add default via 192.168.20.1 dev eth2 table 25
up ip rule add from 192.168.20.121 lookup 25
mtu 9000


auto eth3
iface eth3 inet static
address 192.168.20.122
netmask 255.255.255.0
network 192.168.20.0
broadcast 192.168.20.255
up ip route add 192.168.20.0/24 dev eth3 proto kernel scope link src 192.168.20.122 table 30
up ip route add default via 192.168.20.1 dev eth3 table 30
up ip rule add from 192.168.20.122 lookup 30
mtu 9000


auto vmbr0
iface vmbr0 inet static
address 192.168.16.253
netmask 255.255.252.0
gateway 192.168.16.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
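To sanity-check that the per-NIC tables are actually being consulted, something like the following should show the three source-based rules and the contents of each table (a sketch, using the table numbers above):

ip rule show
ip route show table 20
ip route show table 25
ip route show table 30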

multipath -ll returns this:

36090a068109c09e726d9b48f000020e6 dm-19 EQLOGIC ,100E-00
[size=100G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 23:0:0:0 sdt 65:48 [active][ready]
\_ 8:0:0:0 sdh 8:112 [active][ready]
\_ 24:0:0:0 sdv 65:80 [active][ready]
36090a068109cd9ce8fc0d42900008022 dm-22 EQLOGIC ,100E-00
[size=50G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 29:0:0:0 sdad 65:208 [active][ready]
\_ 11:0:0:0 sdo 8:224 [active][ready]
\_ 30:0:0:0 sdab 65:176 [active][ready]
36090a068109c591a81bc84010000a0fe dm-18 EQLOGIC ,100E-00
[size=500G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 38:0:0:0 sdaj 66:48 [active][ready]
\_ 15:0:0:0 sdj 8:144 [active][ready]
\_ 37:0:0:0 sdaa 65:160 [active][ready]
36090a068109ca9e9c0bed41b00008052 dm-16 EQLOGIC ,100E-00
[size=1.1T][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 32:0:0:0 sdah 66:16 [active][ready]
\_ 12:0:0:0 sdm 8:192 [active][ready]
\_ 31:0:0:0 sdx 65:112 [active][ready]
36090a068109c39b0d7cd4461000040cb dm-13 EQLOGIC ,100E-00
[size=100G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 36:0:0:0 sdac 65:192 [active][ready]
\_ 14:0:0:0 sdi 8:128 [active][ready]
\_ 35:0:0:0 sdaf 65:240 [active][ready]
3600508e000000000725269895cadb101 dm-0 Dell ,VIRTUAL DISK
[size=136G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][active]
\_ 4:1:0:0 sda 8:0 [active][ready]
36090a068109c59a71fd9348f00004093 dm-8 EQLOGIC ,100E-00
[size=60G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 34:0:0:0 sdag 66:0 [active][ready]
\_ 13:0:0:0 sdf 8:80 [active][ready]
\_ 33:0:0:0 sdy 65:128 [active][ready]
36090a068109c09cc8fc0a429000000a7 dm-14 EQLOGIC ,100E-00
[size=800G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 26:0:0:0 sdae 65:224 [active][ready]
\_ 9:0:0:0 sdl 8:176 [active][ready]
\_ 25:0:0:0 sdu 65:64 [active][ready]
36090a068109c59e992d604840000a07c dm-7 EQLOGIC ,100E-00
[size=850G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 20:0:0:0 sdq 65:0 [active][ready]
\_ 6:0:0:0 sdc 8:32 [active][ready]
\_ 19:0:0:0 sdr 65:16 [active][ready]
36090a068109c79ce63c254330000e040 dm-20 EQLOGIC ,100E-00
[size=30G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 28:0:0:0 sdai 66:32 [active][ready]
\_ 10:0:0:0 sdn 8:208 [active][ready]
\_ 27:0:0:0 sdw 65:96 [active][ready]
36090a068109c390c5ad9a49000006004 dm-15 EQLOGIC ,100E-00
[size=30G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 39:0:0:0 sdz 65:144 [active][ready]
\_ 16:0:0:0 sdk 8:160 [active][ready]
\_ 40:0:0:0 sdak 66:64 [active][ready]
36090a068109c999bded7248a000000d1 dm-6 EQLOGIC ,100E-00
[size=210G][features=0][hwhandler=0]
\_ round-robin 0 [prio=3][active]
\_ 18:0:0:0 sds 65:32 [active][ready]
\_ 5:0:0:0 sdb 8:16 [active][ready]
\_ 17:0:0:0 sdp 8:240 [active][ready]

Whilst doing a dd if=/dev/zero of=/data/13GBfile bs=128k count=100K conv=fdatasync I now see all three NICs (eth1, eth2 and eth3) being used in atop.

The most I've been able to get is ~150MB/sec, which is an improvement but not the blistering pace I was hoping for.
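One thing that may be worth trying (a sketch with hypothetical file names): a single sequential dd stream will not necessarily keep all paths busy at once, so a few concurrent writers can give a better idea of the aggregate ceiling:

# write three ~4GB files in parallel and wait for all of them to finish
for i in 1 2 3; do
    dd if=/dev/zero of=/data/testfile$i bs=128k count=32K conv=fdatasync &
done
wait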

My multipath.conf now looks like this:

devices {
device {
vendor "EQLOGIC"
product "100E-00"
path_grouping_policy multibus
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
features "1 queue_if_no_path"
path_checker readsector0
failback immediate
no_path_retry fail
path_selector "round-robin 0"
rr_min_io 10
rr_weight priorities
}
}
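After changing multipath.conf the maps can be rebuilt without a reboot; a sketch, assuming the stock Debian init script name:

# reload the daemon configuration, rebuild the maps, then re-check the paths
/etc/init.d/multipath-tools reload
multipath -r
multipath -ll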

I've also updated from 1.7 to 1.8 and ran the tests on both releases, with similar speed results.

Chris.
 
Update.

Have rolled out the changes across my hosts and all seems to be working ok.

This is my current multipath.conf. I found I needed to add individual multipath entries for each of my LUNs. Remember this is for a Dell Equallogic PS4000XV:

(I have commented out the wwid of my disk as your particular system will be different - this is the local disk that Proxmox resides on. Also, all the multipath entries are particular to my system; yours will be different.)

blacklist {
# wwid 3600508e000000000725269895cadb101
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^sda"
devnode "^sda[0-9]"
}
devices {
device {
vendor "EQLOGIC"
product "100E-00"
path_grouping_policy multibus
getuid_callout "/lib/udev/scsi_id --whitelisted /dev/%n"
features "1 queue_if_no_path"
path_checker readsector0
failback immediate
no_path_retry 5
path_selector "round-robin 0"
rr_min_io 8
rr_weight priorities
}
}
multipaths {
multipath {
wwid 36090a068109ca9e9c0bed41b00008052
alias fileserver_drive
path_grouping_policy multibus
path_checker readsector0
features "1 queue_if_no_path"
path_selector "round-robin 0"
failback immediate
rr_weight priorities
no_path_retry 5
rr_min_io 10
}
multipath {
wwid 36090a068109c591a81bc84010000a0fe
alias VMDATA
path_grouping_policy multibus
path_checker readsector0
features "1 queue_if_no_path"
path_selector "round-robin 0"
failback immediate
rr_weight priorities
no_path_retry 5
rr_min_io 10
}
multipath {
wwid 36090a068109c79ce63c254330000e040
alias ess-mis_OS
path_grouping_policy multibus
path_checker readsector0
features "1 queue_if_no_path"
path_selector "round-robin 0"
failback immediate
rr_weight priorities
no_path_retry 5
rr_min_io 10
}
multipath {
wwid 36090a068109c09cc8fc0a429000000a7
alias misDATA
path_grouping_policy multibus
path_checker readsector0
features "1 queue_if_no_path"
path_selector "round-robin 0"
failback immediate
rr_weight priorities
no_path_retry 5
rr_min_io 10
}
multipath {
wwid 36090a068109c59e992d604840000a07c
alias misOTHER2
path_grouping_policy multibus
path_checker readsector0
features "1 queue_if_no_path"
path_selector "round-robin 0"
failback immediate
rr_weight priorities
no_path_retry 5
rr_min_io 10
}
multipath {
wwid 36090a068109cd9ce8fc0d42900008022
alias misHOME
path_grouping_policy multibus
path_checker readsector0
features "1 queue_if_no_path"
path_selector "round-robin 0"
failback immediate
rr_weight priorities
no_path_retry 5
rr_min_io 10
}
multipath {
wwid 36090a068109c59a71fd9348f00004093
alias ess-comms-001
path_grouping_policy multibus
path_checker readsector0
features "1 queue_if_no_path"
path_selector "round-robin 0"
failback immediate
rr_weight priorities
no_path_retry 5
rr_min_io 10
}
multipath {
wwid 36090a068109c999bded7248a000000d1
alias ess-gimp
path_grouping_policy multibus
path_checker readsector0
features "1 queue_if_no_path"
path_selector "round-robin 0"
failback immediate
rr_weight priorities
no_path_retry 5
rr_min_io 10
}

}

# end

Once this was in place I just had to rediscover my targets and up they came. A word of warning: do not do:

iscsiadm -m node -l

This is supposed to log in to all your targets, but generally doesn't work.

Do:

iscsiadm -m node --targetname "iqn.2001-05.com.equallogic:0-8a0906-e9a99c106-528000001bd4bec0-dataserver" --portal "192.168.20.11:3260" --login

Obviously, replace the target name and portal with the ones for your LUN.
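If you have a lot of LUNs, a rough (untested) sketch for logging in to each discovered target one at a time, assuming the standard "portal,tpgt target-iqn" output of open-iscsi:

# each line of "iscsiadm -m node" is "<ip:port>,<tpgt> <target-iqn>"
iscsiadm -m node | while read portal target; do
    iscsiadm -m node -T "$target" -p "${portal%,*}" --login
done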

All my VMs are showing improvements.

My next challenge is to add multipathing for the guests that use iSCSI LUNs directly from the SAN (i.e. they have disk entries in their VMID.conf files like: virtio1 /dev/sdz,cache=none). I guess I will need to add multipath-tools on the guest? Has anyone had any experience with that?

EDIT: I was being a bit dim here; you just need to add /dev/dm-XX to the VMID.conf file. You can find the appropriate dm number by doing a "multipath -ll" - the dm name is shown right after the wwid for each LUN.
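As a small aside (alias names are the ones defined in the multipath.conf above): the aliases also show up under /dev/mapper, which makes it easy to see which dm device each LUN maps to:

# list the multipath devices by alias
ls -l /dev/mapper/
# and show each map's name together with its major:minor numbers
dmsetup ls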

Thanks for everyone's help on this.

Chris.
 