OCZ Z-Drive R4 vs Adaptec 6805 with 8 Vertex3 Max IOPs on Intel Xeon X5650

vermagic: 2.6.18-238.19.1.el5

that's the RHEL5 driver, you won't get this working with a RHEL6 kernel...
i'd try the RHEL6 driver... (though there's no guarantee, since Proxmox is a Debian-based distro running on a RHEL6-based kernel...)
 
I also tried with their RedHat 6 driver and I get the same error as follows:

...

proxmox:~# isnmod ocz10xx.ko
-bash: isnmod: command not found

...that's not *exactly* the "same error", since i doubt that the command is named "isnmod" on your system... ;-)
 
This was just a typo. I get the exact same error from the RH 6 driver.

proxmox:~# tar -zxvf OCZ\ RHEL-CentOS_6.x_64-bit.tar.gz
ocz10xx.ko
proxmox:~# insmod ocz10xx.ko
insmod: error inserting 'ocz10xx.ko': -1 Invalid module format
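A quick way to confirm the kernel/module mismatch behind "Invalid module format" (a sketch; ocz10xx.ko is the module extracted above, and the exact dmesg wording varies by kernel):

```shell
# "Invalid module format" usually means the module's vermagic string
# doesn't match the running kernel; the kernel log states the exact reason.
uname -r                                   # kernel actually running
if [ -f ocz10xx.ko ] && command -v modinfo >/dev/null 2>&1; then
    modinfo ocz10xx.ko | grep vermagic     # kernel the module was built for
fi
dmesg 2>/dev/null | tail -n 5              # e.g. "version magic ... should be ..."
```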
 
I figured out the issue with the adapter card.

My adapter was not booting because I had two cards plugged into my PCIe ports, the adapter and the Z-Drive, and the system would only boot with one or the other, most often the Z-Drive.

After changing the following setting in the BIOS, the card now always boots fine.

Bios > Advanced > PCI/PnP Configuration

I set Slot J2 PCIe Width to [Force x8/x8]


See attached screenshot (bios settings.jpg).

 
While I am still trying to troubleshoot the performance issue of my 6805, I can report a slight improvement in performance from about 550 MB/s to about 650 MB/s after enabling the "Background consistency check" on the RAID controller from the Storage Manager interface. Here is my new pveperf:

proxmox:~# pveperf
CPU BOGOMIPS: 128040.65
REGEX/SECOND: 983194
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 657.43 MB/sec
AVERAGE SEEK TIME: 0.38 ms
FSYNCS/SECOND: 1721.27
DNS EXT: 50.69 ms
DNS INT: 2.75 ms (server.local)


proxmox:~# pveperf
CPU BOGOMIPS: 128040.65
REGEX/SECOND: 973897
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 651.31 MB/sec
AVERAGE SEEK TIME: 0.37 ms
FSYNCS/SECOND: 1863.07
DNS EXT: 50.50 ms
DNS INT: 2.70 ms (server.local)


proxmox:~# pveperf
CPU BOGOMIPS: 128040.65
REGEX/SECOND: 974098
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 682.35 MB/sec
AVERAGE SEEK TIME: 0.35 ms
FSYNCS/SECOND: 1867.42
DNS EXT: 150.34 ms
DNS INT: 2.63 ms (server.local)


proxmox:~# pveperf
CPU BOGOMIPS: 128040.65
REGEX/SECOND: 975574
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 687.65 MB/sec
AVERAGE SEEK TIME: 0.35 ms
FSYNCS/SECOND: 1862.97
DNS EXT: 54.18 ms
DNS INT: 2.66 ms (server.local)


proxmox:~# pveperf
CPU BOGOMIPS: 128040.65
REGEX/SECOND: 970899
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 687.17 MB/sec
AVERAGE SEEK TIME: 0.35 ms
FSYNCS/SECOND: 1861.16
DNS EXT: 52.15 ms
DNS INT: 2.62 ms (server.local)
 
I just received and installed my new LSI 9265-8i card with 4 Vertex 3 Max IOPS drives in RAID 0. Here are the first pveperf and hdparm runs with the basic config.
Without any optimization we already get better FSYNCS.
I will later install the FastPath SSD optimization and report back.

promoxlsi:~# pveperf
CPU BOGOMIPS: 128356.83
REGEX/SECOND: 976482
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 633.47 MB/sec
AVERAGE SEEK TIME: 0.28 ms
FSYNCS/SECOND: 2338.49
DNS EXT: 33.51 ms
DNS INT: 2.68 ms (local)

promoxlsi:~# pveperf
CPU BOGOMIPS: 128356.83
REGEX/SECOND: 985695
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 629.75 MB/sec
AVERAGE SEEK TIME: 0.26 ms
FSYNCS/SECOND: 2461.47
DNS EXT: 33.73 ms
DNS INT: 2.70 ms (local)

promoxlsi:~# pveperf
CPU BOGOMIPS: 128356.83
REGEX/SECOND: 996061
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 631.22 MB/sec
AVERAGE SEEK TIME: 0.28 ms
FSYNCS/SECOND: 2558.52
DNS EXT: 33.94 ms
DNS INT: 2.74 ms (local)

promoxlsi:~# pveperf
CPU BOGOMIPS: 128356.83
REGEX/SECOND: 982867
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 629.03 MB/sec
AVERAGE SEEK TIME: 0.29 ms
FSYNCS/SECOND: 2530.64
DNS EXT: 33.86 ms
DNS INT: 2.69 ms (local)

promoxlsi:~# pveperf
CPU BOGOMIPS: 128356.83
REGEX/SECOND: 976445
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 629.41 MB/sec
AVERAGE SEEK TIME: 0.28 ms
FSYNCS/SECOND: 2447.59
DNS EXT: 31.91 ms
DNS INT: 2.67 ms (local)


promoxlsi:~# hdparm -t /dev/sda1


/dev/sda1:
Timing buffered disk reads: 512 MB in 0.73 seconds = 704.06 MB/sec
promoxlsi:~# hdparm -t /dev/sda1


/dev/sda1:
Timing buffered disk reads: 512 MB in 0.76 seconds = 673.74 MB/sec
promoxlsi:~# hdparm -t /dev/sda1


/dev/sda1:
Timing buffered disk reads: 512 MB in 0.73 seconds = 705.90 MB/sec
promoxlsi:~# hdparm -t /dev/sda1


/dev/sda1:
Timing buffered disk reads: 512 MB in 0.73 seconds = 699.76 MB/sec
promoxlsi:~# hdparm -t /dev/sda1


/dev/sda1:
Timing buffered disk reads: 512 MB in 0.73 seconds = 702.79 MB/sec
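For what it's worth, hdparm can also report cached reads alongside the buffered figure above, which helps separate controller throughput from the host's memory path (a sketch; adjust the device node to your array):

```shell
DEV=/dev/sda1                         # assumed device node; adjust to your array
if command -v hdparm >/dev/null 2>&1; then
    hdparm -tT "$DEV"                 # -t: buffered disk reads, -T: cached reads
else
    echo "hdparm not installed"
fi
```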
 
After installing the FastPath hardware key I get the following:
similar buffered reads, more fsyncs.

promoxlsi:~# pveperf;pveperf;pveperf
CPU BOGOMIPS: 128289.45
REGEX/SECOND: 976009
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 626.28 MB/sec
AVERAGE SEEK TIME: 0.27 ms
FSYNCS/SECOND: 3282.59
DNS EXT: 39.09 ms
DNS INT: 2.67 ms (local)


CPU BOGOMIPS: 128289.45
REGEX/SECOND: 970630
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 629.24 MB/sec
AVERAGE SEEK TIME: 0.27 ms
FSYNCS/SECOND: 3304.95
DNS EXT: 41.12 ms
DNS INT: 2.70 ms (local)


CPU BOGOMIPS: 128289.45
REGEX/SECOND: 966748
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 630.48 MB/sec
AVERAGE SEEK TIME: 0.25 ms
FSYNCS/SECOND: 3076.58
DNS EXT: 40.25 ms
DNS INT: 2.59 ms (local)


promoxlsi:~# hdparm -t /dev/sda1


/dev/sda1:
Timing buffered disk reads: 512 MB in 0.71 seconds = 725.28 MB/sec


promoxlsi:~# hdparm -t /dev/sda1


/dev/sda1:
Timing buffered disk reads: 512 MB in 0.71 seconds = 723.73 MB/sec


promoxlsi:~# hdparm -t /dev/sda1


/dev/sda1:
Timing buffered disk reads: 512 MB in 0.70 seconds = 729.25 MB/sec
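The FSYNCS/SECOND jump can be sanity-checked outside pveperf with a crude synchronous-write loop (a rough sketch, not pveperf's exact method; `oflag=sync` makes dd flush each 4k block to disk):

```shell
# Time 100 synchronous 4k writes; a config that does more fsync-like ops
# per second (e.g. with FastPath enabled) finishes this noticeably faster.
dd if=/dev/zero of=testfile bs=4k count=100 oflag=sync 2>&1 | tail -n 1
rm -f testfile
```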
 
The Phoronix Test Suite contains a lot of interesting tools.
 
Installation:


Code:
aptitude install build-essential php5 php5-gd php5-cli


Download latest suite from: http://www.phoronix-test-suite.com/?k=downloads, e.g.


Code:
wget http://phoronix-test-suite.com/releases/repo/pts.debian/files/phoronix-test-suite_3.4.0_all.deb

Install with:


Code:
dpkg -i phoronix-test-suite_3.4.0_all.deb
 
What are the test commands that you would recommend with phoronix in order to test:

- disk IO
- disk bandwidth
- anything else?
 
Code:
phoronix-test-suite benchmark build-linux-kernel

IMO it gives some sort of overall performance figure; at least you could compare these results with others posted in another thread...
 
This is not the right test for evaluating disk IO and bandwidth.

However, when I tested build-linux-kernel with the system running on CentOS, I got an average of 116.19 seconds.
 
All this reading. LOL. Okay, seriously folks: has ANYBODY got the Z-Drive R4 to install in Proxmox yet? I'm dying here. I'd LOVE to see the results we get in Proxmox compared to what we get using it on a dedicated server with CentOS 6.2.
Also, for those of you who can settle for 'less' performance, OCZ offers a VeloDrive that DOES work with Proxmox (that's what we're using now).
 
Not to mention, folks... OCZ is releasing a CloudServ product (PCIe x8 in an x16-size card) that is boasting perhaps the fastest speeds (again) of all SSD technology:
6 GB/s, almost 1.5 million IOPS, etc... but it's very expensive. Probably around 10 grand. But I digress.
 
