OCZ Z-Drive R4 vs Adaptec 6805 with 8 Vertex3 Max IOPs on Intel Xeon X5650

hisaltesse

Well-Known Member
Mar 4, 2009
I am building a server on steroids. Here is my configuration:

Intel Xeon X5650
OCZ Z-Drive R4 600GB (the cheapest of the fast enterprise PCIe Gen2 SSDs)
Adaptec 6805 + 8x Vertex 3 Max IOPS

Essentially the Z-Drive R4 600GB claims to do up to 2GB/s sequential R/W and up to 250,000/160,000 IOPS Random R/W (4K).

The Adaptec controller, meanwhile, claims to sustain 2GB/s and peak at 4GB/s.
Each Vertex 3 Max IOPS 120GB claims up to 550/280 MB/s sustained sequential R/W and 35,000/75,000 IOPS random R/W (4K).

I am not sure whether the SSDs' IOPS add up behind the controller, or whether the controller itself has an IOPS limit.

Which one is better and which one should I use for my Proxmox server? This is what I would like to answer.

I just ordered the server and the storage components and should be able to report back my findings next week.

In the meantime, has anyone tested the OCZ Z-Drive R4 or the Adaptec 6805 with 8 SSDs in RAID10?
 
I would avoid running SSDs in a RAID configuration (behind a RAID controller) - that way the OS does not "know" that it has to deal with TRIM, and it could even be a problem trying to do this manually. I do not know whether the 6805 itself meanwhile "knows" about SSD specifics; the 5805, AFAIK, does not.
Apart from that, both types of SSD are based on SandForce controllers, which are fast as hell when the data being handled is compressible; when it is not, they are much slower (but still not bad, I'd say :)
And I *could* imagine that the 6805 might be a bottleneck at least for peak transfers, because it adds another cache layer (512MB), which adds latency for sequential transfers; I do not know whether the 6805 allows switching the cache off (to at least test this).
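If the card is managed with Adaptec's arcconf tool, the logical drive's write cache can usually be toggled from the OS, so the 512MB cache layer could at least be ruled in or out as a bottleneck. A rough sketch - controller and logical drive numbers are assumed to be 1, and the exact option spellings can differ between arcconf versions:

# show the controller's logical drives to confirm the numbers used below
arcconf getconfig 1 ld

# switch the logical drive to write-through for a test run...
arcconf setcache 1 logicaldrive 1 wt

# ...and back to write-back afterwards
arcconf setcache 1 logicaldrive 1 wb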




To your RAID 10 question: we only run some Intel Extreme SSDs in RAID-1 behind 5805s - but those are SLC disks, which means they are not that dependent on "careful flash handling"... ;-)
 
Thanks for your comments:

1. The Adaptec 6805 supports SSDs. I am just not sure whether it is possible to disable its cache and whether that is even recommended.

2. There is no TRIM support behind a RAID controller, which is why our second choice is the Z-Drive R4, which through its own software does something similar to TRIM.

3. From my research, one can increase SSD performance (and possibly durability) by increasing the over-provisioning space from the default of about 7% on most SSDs to 20%. This essentially means reserving, and losing, 20% of the storage and leaving it to the SSD controller for its internal housekeeping. By doing so I would gain performance and might reduce how soon TRIM is needed (see the sketch after this list).

4. Worst case, if I decide to go with the 6805 + SSDs, I could probably reformat each drive after a couple of years of use, or whenever I can feel the performance hit from not TRIMming.
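For point 3, a simple (hedged) way to get the extra over-provisioning on a fresh or secure-erased drive is to partition only about 80% of it and never write to the rest; the controller then keeps the untouched LBAs for its own housekeeping. The same idea applies behind the 6805 by building the array over less than the full drive capacity. A sketch, assuming the drive shows up as /dev/sdb (hypothetical device name):

# script mode (-s): create a GPT label and one partition covering only 80% of the drive;
# the remaining 20% stays unallocated as extra over-provisioning space
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 80%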

Do you know how long it takes before one sees a significant performance decrease on an SSD that is not being TRIMmed?
 
Server arrived.
Installed the Z-Drive R4 (I can see it initialize at boot).
Installed Adaptec 6805 and created a RAID10 volume.

I then used IPMI to load the Proxmox 2.0 and 1.9 images for installation. Both of them started fine but stopped at "No Hard Disk Found".

I suspect that Proxmox is not seeing either of the drives because of driver issues?

Anyone have a clue what to do here?
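For reference, from the installer's debug shell (or any rescue/live shell on that box) one could check whether the kernel enumerates the two devices at all and whether a driver claimed them. A rough sketch - the grep patterns are only guesses at the vendor strings:

# do the controllers show up on the PCI bus?
lspci -nn | grep -Ei 'adaptec|ocz'

# did the aacraid driver bind to the Adaptec and create a block device?
dmesg | grep -i aacraid
cat /proc/partitions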
 
Not that it helps, but I have an Adaptec 6805 in our lab; it works on 1.9 and 2.0.
 

Driver support on the Linux front is as follows:

RedHat Enterprise Linux 6.x
RedHat Enterprise Linux 5.x
CentOS 6.x
CentOS 5.x
SUSE Linux Enterprise Server (SLES) 11

...there have even been earlier revisions of those drives that were "Windows only"...
 
I understand that it works, Tom.

However, can you test a fresh install on a blank RAID array and see whether it installs fine? I am sure that once Proxmox is installed it will probably work with both drives, but right now they are not being seen by the Proxmox installation disk.

Any idea how to make it see these drives?
 
That's what I do here - create a RAID array in the RAID BIOS and run the ISO installer. No idea what's wrong on your side. There are no newer drivers available; we updated the aacraid driver to the latest version at the request of other users, and they also reported that it worked fine on their side (one of them also donated a card to our test lab, so I have one here).

Does your box see the RAID volume with other Linux installation CDs?
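For that check, a quick sketch from any recent Linux live CD (assuming the usual tools are on it):

# is the aacraid module loaded, and did it log anything?
lsmod | grep aacraid
dmesg | grep -i aacraid

# does the RAID volume show up as a block device?
cat /proc/partitions
fdisk -l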
 
Thanks Tom, I will check with another Linux installation.

Now, about the Z-Drive R4: they just released Linux drivers for RedHat 5.x and 6.x here:

http://www.oczenterprise.com/drivers.html

Could you please consider adding it to the Proxmox build?

In the meantime, how do I install Proxmox on this drive? What is the process?
Do I need to first install Debian and then Proxmox? I'd appreciate some guidance.

Thanks.
 
We cannot include proprietary drivers in our ISO. But as it does not boot anyway, you need to install on a second disk and then add the driver.

As always, when we are asked to add support for or help with some special hardware, it is useful to have the same hardware here in our labs.
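For reference, the usual way to add such an out-of-tree module to an installed system is something like the following sketch - assuming a version of ocz10xx.ko built for the running pve kernel is available:

# copy the module into the running kernel's module tree and register it
mkdir -p /lib/modules/$(uname -r)/updates
cp ocz10xx.ko /lib/modules/$(uname -r)/updates/
depmod -a
modprobe ocz10xx

# load it automatically on future boots
echo ocz10xx >> /etc/modules
update-initramfs -u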
 
I would like to confirm that the Adaptec 6805 card works.

The problem earlier was that the adapter card would only show up in the boot process on about one boot out of four; the other times it would not.

I am not sure whether there is an issue with having both the Z-Drive and the Adaptec controller in the two PCIe x8 slots, one on top of the other. But I unplugged the Z-Drive and the Adaptec now loads fine.

Any thoughts?
 
I installed Proxmox 2 beta on the Adaptec 6805 RAID controller with 8 SSDs in RAID 10.

With no containers/VMs and under no load I get these results from pveperf (3 consecutive runs):

CPU BOGOMIPS: 128218.42
REGEX/SECOND: 1070028
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 546.12 MB/sec
AVERAGE SEEK TIME: 0.40 ms
FSYNCS/SECOND: 1651.57
DNS EXT: 37.62 ms
DNS INT: 2.68 ms (local)



root@proxmox:~# pveperf
CPU BOGOMIPS: 128218.42
REGEX/SECOND: 1063682
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 521.74 MB/sec
AVERAGE SEEK TIME: 0.38 ms
FSYNCS/SECOND: 1785.09
DNS EXT: 34.30 ms
DNS INT: 2.55 ms (local)



root@proxmox:~# pveperf
CPU BOGOMIPS: 128218.42
REGEX/SECOND: 1096212
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 561.03 MB/sec
AVERAGE SEEK TIME: 0.38 ms
FSYNCS/SECOND: 1779.83
DNS EXT: 38.43 ms
DNS INT: 2.58 ms (local)


I am now going to install Proxmox 1.9 and test it under no load, then with a few containers, and under load.

If anyone has more robust benchmarks that would better highlight the performance of my configuration, feel free to let me know.
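For anyone who wants to compare: a hedged fio run would stress the array far harder than pveperf does. Something along these lines (fio needs to be installed first; the test file path and sizes are arbitrary):

# 4K random reads, direct I/O, queue depth 32, 4 workers, 60 seconds
fio --name=randread --filename=/var/lib/vz/fio.test --size=4G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --numjobs=4 --direct=1 --runtime=60 --group_reporting

# sequential 1M reads on the same file, to compare against the pveperf BUFFERED READS number
fio --name=seqread --filename=/var/lib/vz/fio.test --size=4G \
    --rw=read --bs=1M --ioengine=libaio --iodepth=4 \
    --numjobs=1 --direct=1 --runtime=60 --group_reporting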
 
Hi,
around 550 MB/s with 8 SSDs in RAID 10 is not very much! Perhaps you should test another RAID controller.

I did a short test some months ago with 4 SSDs (Vertex 2) in RAID 0 (your 8-drive RAID 10 should be slightly better):
more perf_ssd_raid0_4disks
CPU BOGOMIPS: 24083.00
REGEX/SECOND: 971852
HD SIZE: 98.43 GB (/dev/mapper/ssd--test--vg-ssd--test)
BUFFERED READS: 827.39 MB/sec
AVERAGE SEEK TIME: 0.25 ms
FSYNCS/SECOND: 5914.89
DNS EXT: 68.29 ms
DNS INT: 28.51 ms

Udo
 
I agree. I am trying to figure out what the bottleneck is. I expected no less than 1 GB/s.
Still testing.
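One way to narrow down the bottleneck is to read the raw logical device directly, bypassing LVM and the filesystem, and compare that with the pveperf number; if the raw read is also around 550 MB/s, the limit is the controller, its cache, or the PCIe slot rather than the filesystem layer. A sketch - the device name is assumed, adjust it to whatever the controller actually exposes:

# raw sequential read from the Adaptec logical device, bypassing the page cache
dd if=/dev/sda of=/dev/null bs=1M count=8192 iflag=direct

# quick kernel-level read benchmark of the same device
hdparm -t /dev/sda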
 
Two issues:

1. What is Proxmox 1.9 based on? Debian 6 or Debian 5?
I cannot find where this is clearly listed, and I think the Proxmox team should present this information more prominently.

I am not a Debian user other than for Proxmox, and I only use CentOS in containers. Yet every time I have to contact hardware support for my server I am asked which Linux OS I run, and I cannot tell, since on pve.proxmox.com it is only listed clearly for some versions of Proxmox and not all of them.



2. When trying to install the Z-Drive R4 driver using the insmod command, I get an "Invalid module format" error.

proxmox:/home# insmod ocz10xx.ko
insmod: error inserting 'ocz10xx.ko': -1 Invalid module format

Any idea why I am not able to install this driver? Is there another way to install it (for a PCIe SSD)?
 
There is definitely something that does not add up here.

I reset my Adaptec and created an array of 8 SSDs in RAID 0.

However, after installing Proxmox, under no load I get only about 560 MB/s.

proxmox:~# pveperf
CPU BOGOMIPS: 128167.48
REGEX/SECOND: 983685
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 562.65 MB/sec
AVERAGE SEEK TIME: 0.37 ms
FSYNCS/SECOND: 1725.74
DNS EXT: 35.51 ms
DNS INT: 2.74 ms (myserver.local)


This is not normal... something is not right.

Tom, what performance did you get with your Adaptec 6805?
 
Regarding your first issue (which Debian version Proxmox 1.9 is based on):
what about 'cat /etc/debian_version'?

Regarding the second issue (the "Invalid module format" error):

- what does "dmesg" say after trying to load module?
- what does "modinfo xxxx.ko" say about the module?
 
what about 'cat /etc/debian_version'?

5.0.8

- what does "dmesg" say after trying to load module?

It says:

proxmox:~# dmesg
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-6-pve (root@oahu) (gcc version 4.3.2 (Debian 4.3.2-1.1) ) #1 SMP Mon Sep 26 06:32:53 CEST 2011
Command line: root=/dev/mapper/pve-root ro
KERNEL supported cpus:
Intel GenuineIntel
AMD AuthenticAMD
Centaur CentaurHauls

[… IT IS TOO LONG TO POST HERE…]

FS-Cache: Loaded
FS-Cache: Netfs 'nfs' registered for caching
nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
tun: Universal TUN/TAP device driver, 1.6
tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
vmbr0: no IPv6 routers present
venet0: no IPv6 routers present
eth0: no IPv6 routers present
warning: `ntpd' uses 32-bit capabilities (legacy support in use)



- what does "modinfo xxxx.ko" say about the module?

proxmox:~# modinfo ocz10xx.ko
filename: ocz10xx.ko
version: 1.1.1000.01N
license: GPL
description: OCZ Linux driver
author: OCZ Technology Group Inc.
srcversion: C0DE2CBA7F3E644EC6C5041
alias: pci:v00001B85d00001084sv*sd*bc*sc*i*
alias: pci:v00001B85d00001083sv*sd*bc*sc*i*
alias: pci:v00001B85d00001044sv*sd*bc*sc*i*
alias: pci:v00001B85d00001043sv*sd*bc*sc*i*
alias: pci:v00001B85d00001042sv*sd*bc*sc*i*
alias: pci:v00001B85d00001041sv*sd*bc*sc*i*
alias: pci:v00001B85d00001022sv*sd*bc*sc*i*
alias: pci:v00001B85d00001021sv*sd*bc*sc*i*
alias: pci:v00001B85d00001080sv*sd*bc*sc*i*
depends: scsi_mod
vermagic: 2.6.18-238.19.1.el5 SMP mod_unload gcc-4.1
parm: mv_msi_enable: Enable MSI Support for OCZ VCA controllers (default=0) (int)
 
I also tried their RedHat 6 driver and I get the same error; details as follows:

proxmox:~# modinfo ocz10xx.ko
filename: ocz10xx.ko
version: 1.1.1000.01N
license: GPL
description: OCZ Linux driver
author: OCZ Technology Group Inc.
srcversion: 6C92A7485221C8603A8B1EC
alias: pci:v00001B85d00001084sv*sd*bc*sc*i*
alias: pci:v00001B85d00001083sv*sd*bc*sc*i*
alias: pci:v00001B85d00001044sv*sd*bc*sc*i*
alias: pci:v00001B85d00001043sv*sd*bc*sc*i*
alias: pci:v00001B85d00001042sv*sd*bc*sc*i*
alias: pci:v00001B85d00001041sv*sd*bc*sc*i*
alias: pci:v00001B85d00001022sv*sd*bc*sc*i*
alias: pci:v00001B85d00001021sv*sd*bc*sc*i*
alias: pci:v00001B85d00001080sv*sd*bc*sc*i*
depends:
vermagic: 2.6.32-71.el6.x86_64 SMP mod_unload modversions
parm: mv_msi_enable: Enable MSI Support for OCZ VCA controllers (default=0) (int)
parm: mv_prot_mask:host protection mask (uint)

proxmox:~# isnmod ocz10xx.ko
-bash: isnmod: command not found
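One quick check that may explain the "Invalid module format" error: a module only loads if its vermagic matches the running kernel, and both OCZ builds above target RedHat kernels (2.6.18-...el5 and 2.6.32-71.el6) rather than the 2.6.32-6-pve kernel shown in dmesg. A sketch for comparing the two:

# the running kernel...
uname -r

# ...versus the kernel the module was built for
modinfo ocz10xx.ko | grep vermagic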
 
