Hello everyone!
I recently set up FreeNAS with NFS and iSCSI.
For the NFS part I have 10 × 500 GB drives in a RAID 10. Performance is pretty good.
For the iSCSI setup I have 4 × 60 GB OCZ Agility SSDs (brand new, latest firmware) on an IBM ServeRAID controller. I'm using them individually right now. The OS is FreeNAS (FreeBSD...). I have configured the VM host with iSCSI + LVM.
With NFS I can max out gigabit Ethernet, but talking to a single SSD over iSCSI I can't break 7 MB/s! My test is a "qmrestore" of a backup, reading from NFS and writing to an LVM storage target that sits on top of iSCSI.
The VM host has dual bonded gigabit Ethernet.
The FreeNAS box has a single gigabit Ethernet link.
Any ideas on what I should be looking at?
An `iostat -x <something>` on the VM host shows the iSCSI LUN as 100% utilized.
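To separate the network path from the iSCSI/LVM layer, one thing I could try is a raw sequential write with dd directly against the iSCSI-backed device. This is just a sketch: the target path here is a scratch file stand-in, and on the real host it would be the actual LV or block device that qmrestore writes to (path not shown in this post):

```shell
#!/bin/sh
# Stand-in target path -- on the real setup, point this at the
# iSCSI-backed LVM device instead of a scratch file.
TARGET=/tmp/iscsi-write-test.bin

# Sequential write of 64 MiB; conv=fsync forces the data to be
# flushed before dd reports throughput, so the page cache does
# not inflate the result.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -1
```

If this dd test also stalls around 7 MB/s, the problem is below the qmrestore layer (iSCSI target, network, or SSD/controller); if it runs fast, the restore workload itself is the issue.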