I/O perf problem with LSI MegaRAID SAS 9261-8i

gregober

New Member
Apr 20, 2012
Hello folks,


We are facing some very problematic I/O performance issues with the LSI MegaRAID SAS 9261-8i.
It is configured as RAID-6, and we see the following performance (measured on the Proxmox master host):

Code:
proxmaster:~$ dd if=/dev/urandom of=test bs=1024k count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 15.4111 s, 6.8 MB/s


This is roughly half of what we get on another host with a very similar configuration on older hardware.


FW Package Build: 12.7.0-0007
BIOS Version: 3.13.00
FW Version: 2.70.03-0862

The Proxmox host is configured with an ext3 filesystem.

Nothing special besides that.


Any advice or pointers would be welcome.
 
Hi,
/dev/urandom is not the right input for testing write speed: generating random data is CPU-bound, so dd ends up measuring the kernel's random number generator rather than the disk.

It's better to use /dev/zero (unless you are testing SSDs behind a controller that compresses data), or to copy the file to /tmp first so it is cached in RAM.

Also use fdatasync for the write. See the difference here:
Code:
root@proxmox:/var/lib/vz# dd if=/dev/urandom of=test bs=1024k count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 9.42809 s, 11.1 MB/s
root@proxmox:/var/lib/vz# dd if=/dev/urandom of=/tmp/test bs=1024k count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 9.39765 s, 11.2 MB/s
root@proxmox:/var/lib/vz# dd if=/tmp/test of=test bs=1024k count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.405884 s, 258 MB/s
root@proxmox:/var/lib/vz# dd if=/tmp/test of=test bs=1024k count=100 conv=fdatasync
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.963038 s, 109 MB/s
BTW, this is an Areca RAID controller; I don't like LSI controllers (crapware).

Udo
 
Code:
root@proxmox:/var/lib/vz# dd if=/dev/urandom of=test bs=1024k count=100
BTW, 100 MB is also not enough data to really test the write speed.

I normally use a file size of 8 GB (much more than the RAID controller's cache):
Code:
root@proxmox:/var/lib/vz# dd if=/dev/urandom of=/tmp/test bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 778.981 s, 11.0 MB/s

root@proxmox:/var/lib/vz# dd if=/tmp/test of=test bs=1M conv=fdatasync
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 68.9971 s, 124 MB/s
As you can see, you can also use /dev/zero as the input:
Code:
root@proxmox:/var/lib/vz# dd if=/dev/zero of=test bs=1M count=8192 conv=fdatasync
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 73.0978 s, 118 MB/s
Udo
 
Could you please run a fio test suite to get "real" performance figures? dd is not an I/O performance tool.
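A sequential write job along these lines would be a reasonable start (the job name and target path below are only examples; pick a size that comfortably exceeds the controller cache):
Code:
# Example fio run: 8 GB sequential write with direct I/O, bypassing the
# page cache so the controller and disks are what actually get measured.
fio --name=seqwrite --filename=/var/lib/vz/fio-test --rw=write --bs=1M --size=8G --direct=1 --ioengine=libaio --iodepth=32 --group_reporting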
We have a 9271-8i with 6 enterprise SSDs and get a combined 1 GB/s read and write for single-threaded operation, and multithreaded reads at about 2.5 GB/s. That machine is not running Proxmox but Debian Wheezy.

Are you running a WriteBack cache setup? Is the BBU or capacitor OK? What is the Disk Cache setting?
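If the LSI MegaCli utility is installed (the binary name varies between MegaCli, MegaCli64 and megacli depending on the package), something like the following should show those settings:
Code:
# current cache policy of the logical drives (WriteBack vs. WriteThrough)
MegaCli64 -LDInfo -Lall -aALL
# battery backup unit status
MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
# per-drive disk cache setting
MegaCli64 -LDGetProp -DskCache -LAll -aALL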