iSCSI performance problem

Robstarusa

Renowned Member
Feb 19, 2009
Hello everyone!

I recently set up FreeNAS with NFS & iSCSI.
For the NFS part I have 10 * 500G drives in a RAID10. Performance is pretty good.

For the iSCSI setup I have 4 * 60G OCZ Agility SSDs (brand new, latest firmware) on an IBM ServeRAID controller. I'm using them individually (no RAID) right now. The OS is FreeNAS (FreeBSD-based). I have configured the VM host with iSCSI + LVM.
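For reference, the Proxmox side is wired up roughly like this in /etc/pve/storage.cfg (a sketch - the storage IDs, portal address, target IQN, base volume and VG name below are placeholders, not the real values):

iscsi: freenas-iscsi
        portal 192.168.0.10
        target iqn.2009-02.org.freenas:target0
        content none

lvm: iscsi-lvm
        vgname vg_iscsi
        base freenas-iscsi:0.0.0.scsi-placeholder
        content images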

With NFS I can max out gigabit Ethernet. Talking to a single SSD over iSCSI, I can't break 7 MB/second! My test is a qmrestore of a backup from NFS to an LVM storage target that sits on top of iSCSI.
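The restore command is something along these lines (archive path, VM ID and storage ID are placeholders):

qmrestore /mnt/pve/nfs-backup/vzdump-qemu-101.tgz 101 --storage iscsi-lvm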

The VM host has dual bonded gigabit Ethernet.
The FreeNAS box has a single gigabit Ethernet NIC.

Any ideas on what I should be looking at?

An iostat -x <something> shows the iSCSI LUN as 100% utilized on the VM host.
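Concretely, something like this on the VM host, where sdb stands in for whatever device the iSCSI LUN shows up as:

iostat -x sdb 2

The %util column is the one pegged at 100%.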
 
Hi,
do you use the same LAN for normal networking and for iSCSI? Perhaps it's better to break the bond and use one NIC for normal traffic and one NIC for iSCSI traffic.
For iSCSI it's usually good to use a bigger MTU (9000).
First check the raw network performance with iperf - try with both bond paths and then with only one. Can you change the switch between Proxmox and FreeNAS? Or use a direct cable for testing?
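For example (interface names and the FreeNAS IP are placeholders for your own values):

# on the FreeNAS box
iperf -s

# on the Proxmox host
iperf -c 192.168.0.10 -t 30

# jumbo frames must be enabled on every hop (both NICs and the switch):
ifconfig eth0 mtu 9000    # Linux/Proxmox side
ifconfig em0 mtu 9000     # FreeBSD/FreeNAS side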

Udo
 
MTU 9000 isn't always a good idea... I had performance troubles because of it...

After disabling it (on the switches, NICs and so on), everything ran fine...
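For what it's worth, if you do want to try jumbo frames, you can verify the whole path first with a non-fragmenting ping from the Proxmox host (the IP is a placeholder for the FreeNAS box):

ping -M do -s 8972 192.168.0.10    # 8972 = 9000 minus 28 bytes of IP/ICMP headers

If that fails anywhere along the path, MTU 9000 isn't configured end to end and performance will suffer.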