iSCSI performance test?

copymaster
Hi.
Can someone please tell me how to run a performance test against an iSCSI LUN?

I have a 4-node cluster running, connected to an iSCSI LUN on a NetApp. On top of the iSCSI there's LVM.

All KVM machines reside on that storage, but I only get a throughput of about 17 MB/sec from within a virtual machine.

I now want to test the performance from a node directly.

I will make a testfile (1 GB) and test the performance with

time cp testfile <netappmounted dir>

But where is that iSCSI LUN mounted??
With a simple fdisk -l I only see the virtual harddisks, which show up as /dev/dm-xx.

Is that the right way to test performance??

Please help.

I just want to find out how long it takes to copy a testfile manually from the Proxmox node to that iSCSI LUN.
 
Hi,
create a logical volume in the volume group that is on the iSCSI NetApp, format the LV and mount it. Then you can test.
E.g.:
Code:
lvcreate -L 20G -n testvol /dev/iscsivg   # create a 20 GB test LV in the iSCSI-backed VG
mkfs.ext3 /dev/iscsivg/testvol            # put a filesystem on it
mount /dev/iscsivg/testvol /mnt           # mount it on the host
pveperf /mnt                              # run the Proxmox benchmark against the mountpoint
Further tests with bonnie++ and dd can be used for write performance.
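For the dd write test, something like this should work (untested here - conv=fdatasync makes dd flush to disk before it reports the time, so the page cache does not inflate the result):
Code:
dd if=/dev/zero of=/mnt/ddtest bs=1M count=1024 conv=fdatasync   # write 1 GB and flush it to disk
rm /mnt/ddtest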

Please report your values (I only tested iSCSI, because of the performance...).

Udo
 
Thank you, Udo, for the reply.

That is my problem: I cannot find the device node of the mounted iSCSI LUN.

All KVM machines are working, but I don't know how to find the mountpoint.

A mount shows:

/dev/mapper/pve-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)

But there's no iSCSI mountpoint???

And: I just need to test performance. Can you please give advice on how I can find the iSCSI mountpoint?

I use iSCSI with LVM, and in a KVM config there's the line

ide0: LVM1iscsi1:vm-101-disk-1

and the

/etc/iscsi/nodes/iqn.1992-08.com.netapp:01.896ebee516/172.16.0.5,3260,2000

shows:

node.name = iqn.1992-08.com.netapp:01.896ebee516
node.tpgt = 2000
node.startup = manual
iface.hwaddress = default
iface.iscsi_ifacename = default
iface.net_ifacename = default
iface.transport_name = tcp
node.discovery_address = 172.16.0.5
node.discovery_port = 3260
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.auth.authmethod = None
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 20
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 172.16.0.5
node.conn[0].port = 3260
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.DataDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No



Under /dev I have many dm-xx entries, which seem to be the virtual disks.

Thank you
 
Hi,
of course you can't see a mounted partition - that's why I wrote the example to create and mount one!

To see the volume group, try vgdisplay - it must be the same as the VG defined in /etc/pve/storage.cfg.
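Something like this (the VG name is whatever vgdisplay shows on your node):
Code:
vgdisplay                  # list all volume groups - the iSCSI-backed VG should show up here
pvs                        # shows which physical device (/dev/sdX) backs each VG
cat /etc/pve/storage.cfg   # compare with the VG name configured for the LVM storage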


Udo
 
Thank you

Now I understand the procedure.

Maybe you can now solve my problem:

I have many KVM machines (W2k3), all on the iSCSI store. I tested the virtual HDDs with HD Tune from within Windows and it gave me an average performance of 17 MB/sec. That's why I wanted to test the direct connection from a Proxmox node to the iSCSI store.

With your help I could make an LVM volume. I mounted it and tested with pveperf.

That gave me a value of about 10.63 MB/sec, which is really bad.

After that I created a testfile (1 GB) and copied it over to the mounted storage. That took 19.6 seconds, which works out to about 52 MB/sec.

Another test from one KVM machine to another with NETIO gave me values between 65-90 MB/sec.

Now the $100,000 question:

WHY?

pveperf says 10.63 MB/sec
copy testfile: 52.00 MB/sec

HD Tune (within KVM machine): 17 MB/sec (even with the virtio storage driver)
NETIO (within KVM machine): 65-90 MB/sec

A simple copy action from one KVM guest to a CIFS share also runs at about 17 MB/sec.

How can I improve the performance of the KVM machines? I already tried the virtio network driver and the e1000 one.
 
After that I created a testfile (1 GB) and copied it over to the mounted storage. That took 19.6 seconds, which works out to about 52 MB/sec.
Hi,
if you simply copy, then the 52 MB/s are wrong - Linux uses memory for caching. You must take the time for "cp x /mnt; sync" on a quiet machine.
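To also include the sync in the measurement, wrap both commands in one shell - with a plain "time cp x /mnt; sync" only the cp is timed. Roughly like this:
Code:
time sh -c 'cp testfile /mnt/ && sync'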
Another test from one KVM machine to another with NETIO gave me values between 65-90 MB/sec.

Now the $100,000 question:

WHY?

pveperf says 10.63 MB/sec
copy testfile: 52.00 MB/sec
as written before - caching.
How can I improve the performance of the KVM machines? I already tried the virtio network driver and the e1000 one.
virtio is for better performance from VM to host - your bottleneck is from host to storage.
I'm no iSCSI expert, but you can check:
1. NIC only for iSCSI? You should use a separate NIC.
2. Perhaps bonding (trunking) - e.g. more than one NIC?
3. Increasing the MTU to jumbo frames (9000)?!
4. A better switch, or perhaps a crossover cable?

For performance reasons I use FC (I'm lucky that we have some FC RAIDs).

Udo
 
Hi Udo,

Well, I did as you said:

time cp testfile /mnt;sync

and this time it gave me an astonishing 7.35 sec, which seems to be 142 MB/sec!


And:

I already have a dedicated NIC for iSCSI (via VLAN tagging), and the NetApp also has one dedicated NIC in this VLAN.

I really do not know what to test next.

The iSCSI LUN is on a NetApp shelf built of 14 SATA disks of 512 GB each.

I will create a CIFS share on that NetApp, mount it, and test the copy action again.
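I plan to do it roughly like this (the share name and user are just placeholders):
Code:
mkdir -p /mnt/cifs
mount -t cifs //netapp/testshare /mnt/cifs -o username=test   # placeholder share and user
time sh -c 'cp testfile /mnt/cifs/ && sync'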

Maybe the disks are the bottleneck.

If someone has another idea, I would like to hear from you!

Thank you
 
ok... :(

Now it looks like this:

CPU BOGOMIPS: 76609.04
REGEX/SECOND: 796208
HD SIZE: 19.69 GB (/dev/mapper/pve1-testvol)
BUFFERED READS: 13.04 MB/sec
AVERAGE SEEK TIME: 13.04 ms
FSYNCS/SECOND: 275.07
DNS EXT: 58.31 ms
DNS INT: 0.72 ms


and the copy command gives:
real 1m23.072s
user 0m0.028s
sys 0m2.184s

which is 12 MB/sec

But I am glad I could learn a lot from your answers! Thank you, Udo.

In the end I don't know where the error is. It seems to be the LAN - what do you think?

The hardware is:

NetApp <--1 Gbit--> switch <--1 Gbit--> Proxmox node
 
Hi,
to find the bottleneck (I think it's not an error), do you have a chance to use a direct connection? And to change the MTU (e.g. "ifconfig eth1 mtu 9000" - back with "ifconfig eth1 mtu 1500")?
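If the jumbo frames help, they can be made permanent in /etc/network/interfaces, roughly like this (eth1 and the address are just examples - use your iSCSI interface, and the switch and the NetApp port must also be set to MTU 9000):
Code:
auto eth1
iface eth1 inet static
        address 172.16.0.10
        netmask 255.255.255.0
        mtu 9000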

Perhaps someone else can report transfer values with NetApps...

I think the bottleneck is on the network side, but if the NetApp is busy it's perhaps due to the many IOs. To speed up the RAID, you need the right RAID level (RAID-10 is much better than RAID-5).
I don't know how good the RAID controllers are that NetApp uses.

Udo
 
3. Increasing the MTU to jumbo frames (9000)?!

Excuse me, but the 2.6.32 kernel has a regression with enabling jumbo frames with the e1000e module :(
I can't set an MTU size larger than 1500 - "SIOCSIFMTU: Invalid argument".
Do you know how to fix it with the 2.6.32-4-pve kernel?
 
Did you try the latest kernel from pvetest?
 
I use dd to test iSCSI performance on the guest VM; not sure how accurate this would be, but I thought I would share.

PVE Host:
PVE 1.7 on Dell PE 1950
1 dedicated 1 Gbit NIC
Jumbo frames @ 9000
iSCSI with LVM

SAN:
OpenIndiana (Solaris) on Dell PE 2950
1 dedicated 1 Gbit NIC
Jumbo frames @ 9000
ZFS with compression and dedup on
COMSTAR iSCSI

VMguest:
Single VM
CentOS 5.5 32-bit
virtio disk driver in the guest

READ test: pure read from the disk to nothing
dd of=/dev/null if=/dev/vda bs=128K count=100k
102400+0 records in
102400+0 records out
13421772800 bytes (13 GB) copied, 108.615 seconds, 124 MB/s

WRITE test: from memory to disk
dd if=/dev/zero of=/tmp/13GBfile bs=128k count=100K
102400+0 records in
102400+0 records out
13421772800 bytes (13 GB) copied, 238.07 seconds, 56.4 MB/s

READ/WRITE test: disk to disk
dd of=/tmp/13Gfile if=/dev/vda bs=128k count=100K
102400+0 records in
102400+0 records out
13421772800 bytes (13 GB) copied, 401.174 seconds, 33.5 MB/s
 
I use dd to test iSCSI performance on the guest VM; not sure how accurate this would be, but I thought I would share.

...
WRITE test: from memory to disk
dd if=/dev/zero of=/tmp/13GBfile bs=128k count=100K
102400+0 records in
102400+0 records out
13421772800 bytes (13 GB) copied, 238.07 seconds, 56.4 MB/s
...
Hi,
the write speed is wrong due to buffering. If you use "dd if=/dev/zero of=/tmp/13GBfile bs=128k count=100K conv=fdatasync" you will get "better" values.

Udo
 
Thanks Udo,

I reran the test with your suggestion, with/without the conv=fdatasync, a few times and with a smaller size.

What do you mean by your statement "you will get 'better' values" - better performance or more accurate?

dd if=/dev/zero of=/tmp/13GBfile bs=128k count=100K conv=fdatasync
102400+0 records in
102400+0 records out
13421772800 bytes (13 GB) copied, 261.719 seconds, 51.3 MB/s

[root@dhcp-10-1-1-204 tmp]# dd if=/dev/zero of=/tmp/13GBfile bs=128k count=100K
102400+0 records in
102400+0 records out
13421772800 bytes (13 GB) copied, 228.874 seconds, 58.6 MB/s

Smaller size
[root@dhcp-10-1-1-204 tmp]# dd if=/dev/zero of=/tmp/1.3GBfile bs=128k count=10K conv=fdatasync
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 21.4202 seconds, 62.7 MB/s
[root@dhcp-10-1-1-204 tmp]# dd if=/dev/zero of=/tmp/1.3GBfile bs=128k count=10K
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 17.365 seconds, 77.3 MB/s

Paul...
 
Thanks Udo,

I reran the test with your suggestion, with/without the conv=fdatasync, a few times and with a smaller size.

What do you mean by your statement "you will get 'better' values" - better performance or more accurate? ...
Hi,
I mean more accurate. Your values don't differ much. Since you test from the VM, I guess the host is also caching? Or do you have the parameter "cache=none" on the device entry of the VM?
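The cache setting is appended to the disk line in the VM config file, roughly like this (the storage and disk names are from the earlier example in this thread - use your own):
Code:
ide0: LVM1iscsi1:vm-101-disk-1,cache=none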
To test the speed of the iSCSI RAID it would be helpful if you create an LV on the iSCSI VG, make a filesystem on it and mount it on the host.
The speed there is then without the impact of KVM.

Udo
 
