Hard Drive I/O Question...

oeginc

I've got:

* Highpoint 2640x4 4-port RAID controller card
* 4 x Seagate 500GB 7,200 RPM 16MB Cache hard drives (http://www.newegg.com/Product/Product.aspx?Item=N82E16822148374) in RAID-10

And then I have
* Western Digital 2TB 7,200 RPM 64MB Cache Hard Drive (http://www.newegg.com/Product/Product.aspx?Item=N82E16822136456)

As you know, I've had problems with I/O and IOPS for a while, so I've been playing around with testing various scenarios and trying to find the optimal solution for my situation.

I've stumbled across something that seemed rather odd to me and was wondering if there was a good explanation.

I know the Highpoint isn't a top-of-the-line Adaptec RAID card or anything, but I've read some reviews and it was rated rather highly and is supposed to have good throughput.

I've tried two different tests:

1a. Write Speed via dd if=/dev/zero of=test.out bs=512k count=64k
1b. Read Speed via dd if=test.out of=/dev/null (see the cache-dropping note below)

2. pveperf
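
(For anyone repeating this: a slightly more cache-proof way to run the same pair is sketched below. conv=fdatasync and the drop_caches step are the only additions to the plain commands above; treat it as a rough sketch, not gospel.)

# write test; conv=fdatasync makes dd flush to disk before it reports a rate
dd if=/dev/zero of=test.out bs=512k count=64k conv=fdatasync

# drop the page cache so the read test actually hits the disks
sync
echo 3 > /proc/sys/vm/drop_caches

# read test
dd if=test.out of=/dev/null bs=512k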

I'm getting:
Highpoint w/4 drives in RAID-10
R = 129 MB/s
W = 60 MB/s

CPU BOGOMIPS: 20042.83
REGEX/SECOND: 821144
HD SIZE: 901.01 GB (/dev/mapper/pve2-machines)
BUFFERED READS: 76.45 MB/sec
AVERAGE SEEK TIME: 16.13 ms
FSYNCS/SECOND: 343.29
DNS EXT: 93.10 ms
DNS INT: 73.25 ms

Single Western Digital Black
R = 121 MB/s
W = 94 MB/s

CPU BOGOMIPS: 20042.83
REGEX/SECOND: 794168
HD SIZE: 1818.03 GB (/dev/mapper/pve3-backup)
BUFFERED READS: 107.96 MB/sec
AVERAGE SEEK TIME: 11.96 ms
FSYNCS/SECOND: 1628.22
DNS EXT: 84.07 ms
DNS INT: 40.56 ms

To me it looks like the WD Black drive is kicking the RAID's butt... I'd almost prefer to stick with a single WD drive in this case. Why is that? I know the Western Digital has a larger cache per drive, but the total cache across the four Seagates is the same 64MB... I would expect my RAID numbers to be roughly twice what they are.

Am I missing something, or is this to be expected? Is the only way to get decent performance from RAID to go with 8+ drives?
 
Did I miss something, or does this RAID controller really have no cache? Don't expect any reasonable performance then, and you are also using slow notebook disks.

The single-drive results look OK.

As a reference, I will post results from an Adaptec 5805Z with 4 x 1 TB WD RE3 drives configured as RAID10 in the next post (the server is just not available for running the test right now).

And you should use server hard drives instead of desktop or notebook hard drives.
 
No, there is no cache on the controller. Tom's Hardware gave the controller a favorable review, even saying "Clearly, Highpoint can match Adaptec’s performance using four 15,000 RPM Fujitsu MBA3174RC drives" and "Depending on the I/O benchmark, the Adaptec RAID 5405 provides marginally or up to 15% better I/O performance than Highpoint’s low-budget offering," which is pretty impressive considering the price difference.

I *HAVE* a 5805z RAID setup, and the performance isn't that impressive...

I'm seeing:
R = 169.44 MB/s
W = 125.31 MB/s

with the same tests as above, and that's on a 5805z with BBU and 8 x 500GB Seagate 7200 RPM drives.

The reason for the "laptop" hard drives is that they're all this server has room for (hence why I was checking into iSCSI solutions, but those performance numbers were even worse).

And yes, in a perfect world we should all buy server motherboards with server ethernet cards, server RAM, and server hard drives - but in reality, I'd be willing to bet that by far the majority of users here are using commodity hardware for their ProxMox installations.

That being said, I was trying to determine what could be causing such a drastic difference between a single WD Black drive and the 4 drives in RAID-10... All drives are 7200 RPM...
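
In case it helps narrow things down, I can also watch the device while the sequential test runs. Something like this (just a sketch; iostat comes from the sysstat package, and the device to watch is whatever the controller presents - /dev/mapper/pve2-machines in my case):

# terminal 1: per-device throughput, queue size and await, refreshed every second
iostat -xm 1

# terminal 2: the same sequential write test as above
dd if=/dev/zero of=test.out bs=512k count=64k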
 
here are my numbers (RAID10, only 4 disks):

with the dd test I get:

Write: 170 MB/s
Read: 289 MB/s

pveperf:
CPU BOGOMIPS: 8534.25
REGEX/SECOND: 269728
HD SIZE: 798.20 GB (/dev/mapper/pve-data)
BUFFERED READS: 247.68 MB/sec
AVERAGE SEEK TIME: 9.37 ms
FSYNCS/SECOND: 2321.30
DNS EXT: 155.42 ms
DNS INT: 1.15 ms

And note: a quick dd or pveperf run is just a snapshot, not a real and reliable benchmark. But anyway, I see much better numbers on my box - so let's find out what the difference is. I am using 4 x 1 TB WD RE3 disks - the cache on the disks is disabled, and the RAID controller cache is set to write-back.
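
To illustrate why the cache settings matter so much for that fsync number: as far as I know, pveperf's fsync test is essentially a loop of small writes, each followed by an fsync, so it counts how many of those round trips the storage completes per second - and without a write-back cache every fsync has to wait for the platters. You can get a rough feel for it with a plain dd doing synchronous writes (just a quick sketch; run it on the storage you actually care about, e.g. under /var/lib/vz):

# each 4k write is flushed to stable storage before the next one starts,
# so the reported rate roughly tracks fsyncs/second
dd if=/dev/zero of=/var/lib/vz/dsync-test bs=4k count=1000 oflag=dsync
rm /var/lib/vz/dsync-test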
 
... but in reality, I'd be willing to bet that by far the majority of users here are using commodity hardware for their ProxMox installations ...

I disagree here. Most people here know exactly what to expect from different hardware. So the rule is simple: if you want fast performance, use fast hardware. If you go for slow parts, don't expect the same.

Based on my personal experience (and on what others have reported here), cheap RAID controllers make no sense. A lot of Linux users switched from cheap RAID controllers to software RAID (see the quick sketch below), but in most cases it's better to invest in the best storage you can get - for Proxmox VE that means RAID10 with as many disks as possible and 15,000 RPM SAS drives, and/or a fast FC SAN or other high-performance storage.
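
A quick sketch of the software RAID route, assuming four empty disks that show up as /dev/sdb through /dev/sde (adjust the device names to your system):

# create a 4-disk RAID10 array; put LVM or a filesystem on /dev/md0 afterwards
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# watch the initial sync
cat /proc/mdstat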
 
I have fast hardware; I have 16-core systems with 144 GB of RAM and 15K SAS drives. I also have to support a lot of customer hardware that is not top-of-the-line...

But you're missing the point of the original question: it was comparing drives on the same platform, and I was only asking if you knew what would cause such a large discrepancy. And I think you'd be surprised how many ProxMox installs are running on less-than-server class hardware... As I said before, I'd be willing to bet it's more than 1/2 of your customer base.

Not everyone is fortunate enough to have the kind of money required to invest in their data centers in order to make them perfect, especially here in the United States...

P.S. Not to mention, I have several customers who have been running their servers on commodity hardware flawlessly for 10 years now. How am I supposed to convince them to "upgrade" to server hardware when what they have is working so well?
 
Not everyone is fortunate enough to have the kind of money required to invest in their data centers in order to make them perfect, especially here in the United States...

You do know that the United States is one of the richest countries in the world?
 
I have fast hardware; I have 16-core systems with 144 GB of RAM and 15K SAS drives. I also have to support a lot of customer hardware that is not top-of-the-line...

But you're missing the point of the original question: it was comparing drives on the same platform, and I was only asking if you knew what would cause such a large discrepancy.

I answered; just to repeat the two answers: the RAID controller has no cache, so I am pretty sure the issue is around caching (disks/RAID).
And second, you are not running a real benchmark.

And I think you'd be surprised how many ProxMox installs are running on less-than-server class hardware... As I said before, I'd be willing to bet it's more than 1/2 of your customer base.

Hey, you know our customers better than we do - Cool!
 
You do know that the United States is one of the richest countries in the world?

Yes, we're ranked #7 out of the top 10. And did you know that less than 1% of the US population is responsible for more than 50% of the wealth in the United States? Did you know that we're in one of the worst depressions/recessions we've ever seen? Did you know that our country is almost $14 TRILLION in debt right now? Or that in 2009 almost 3 million Americans lost their homes? That's approximately 1 in 45 homeowners, and that's been going on for the past several years... So yes, I know all about the United States and the people that live in it.

And that's how you justify replying to every question with "Buy Server Hardware"? I'm not here to debate which is better... Sure, with an unlimited budget server hardware is the way to go. But I know for a fact that a significant number of people don't go that route. We run a fairly large data center here in the states, and co-locate at 27 others as well as consult with hundreds of companies around the world. By far the majority of them are not running server-class hardware and getting them to switch when what they've been using has been working for years isn't going to happen...

So we can argue about something completely irrelevant to the question, or we can try to help each other. I was simply curious as to why I might be seeing such poor speeds out of a RAID controller that has been tested by numerous people as being pretty much on par with the Adaptec 5805z.

I appreciate everything you guys have done and continue to do, don't get me wrong - I honestly do... I'm just a little frustrated that the response to every question is "Buy server hardware", regardless of whether or not it would make a difference. For the record, I'm getting poor performance with my server-class hardware too...

That being said, I upgraded the BIOS on the Highpoint 2640x4 card yesterday, and now this is what I'm getting:

W = 98.5 MB/s
R = 175.0 MB/s

CPU BOGOMIPS: 20042.73
REGEX/SECOND: 786161
HD SIZE: 901.01 GB (/dev/mapper/pve2-machines)
BUFFERED READS: 139.81 MB/sec
AVERAGE SEEK TIME: 16.60 ms
FSYNCS/SECOND: 335.59
DNS EXT: 88.59 ms
DNS INT: 55.73 ms

Still not great, but much better... The fsyncs/second are still a little troublesome to me.
 
...

CPU BOGOMIPS: 20042.73
REGEX/SECOND: 786161
HD SIZE: 901.01 GB (/dev/mapper/pve2-machines)
BUFFERED READS: 139.81 MB/sec
AVERAGE SEEK TIME: 16.60 ms
FSYNCS/SECOND: 335.59
DNS EXT: 88.59 ms
DNS INT: 55.73 ms

Still not great, but much better... The fsyncs/second are still a little troublesome to me.

You are complaining, but you are not following our hints about where to check. So again, the question is still open: what are the cache settings in your storage stack?

HDD cache: enabled or disabled?
RAID controller cache: available or not available? If yes, what setting?

All of these affect fsyncs dramatically, so take a close look at them!
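
For SATA disks you can check and toggle the on-disk write cache from Linux, for example (a sketch - replace /dev/sdX with your drive, and note that behind some RAID controllers the member disks are not visible to hdparm at all):

# query the current write-cache setting
hdparm -W /dev/sdX

# disable / re-enable the volatile on-disk write cache
hdparm -W0 /dev/sdX
hdparm -W1 /dev/sdX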
 
You are complaining, but you are not following our hints about where to check. So again, the question is still open: what are the cache settings in your storage stack?

HDD cache: enabled or disabled?
RAID controller cache: available or not available? If yes, what setting?

All of these affect fsyncs dramatically, so take a close look at them!

Sorry, I assumed that since you wrote about the controller not having a cache that you knew the controller didn't have a cache. I won't do that next time.

HDD Cache is ON.
Controller Cache is non-existent.

The Highpoint 2640x4 card does not give you ANY user-configurable options other than which hard drives you want in the array and what type of array you want. I can't set block size, cache, nothing. And before you say "Well, THAT's the problem," please take a look at the reviews of this card from other reputable people - it is supposed to have performance on par with Adaptec.

Quite honestly, in a RAID-10 configuration without cache I'm getting about what I'd expect from only 4 drives (roughly double the performance of a single drive). I found out the WD Black drives have dual processors on them to help increase throughput, which could help explain why that drive is so much faster than these run-of-the-mill drives...
 
I answered; just to repeat the two answers: the RAID controller has no cache, so I am pretty sure the issue is around caching (disks/RAID).
And second, you are not running a real benchmark.

I know these aren't real benchmarks that will give me real world performance numbers, but they are useful for comparing one drive to the next on the same system. I was just trying to find out why my performance was so poor. I appreciate you taking the time to help me with this as much as you have.

Hey, you know our customers better than we do - Cool!

You're right, I apologize... I misspoke. I meant to say "ProxMox install base" instead of "your customer base".
 
And that's how you justify replying to every question with "Buy Server Hardware"? I'm not here to debate which is better... Sure, with an unlimited budget server hardware is the way to go.

It is just our experience that good/fast server hardware reduces overall costs (costs/VM).
 
I agree too! When I started playing with Proxmox, I was a bit "upset" by the "don't use SATA HDs with Proxmox" and "you NEED a RAID controller with BBU" replies, because I was able to do a lot of things with Proxmox at home (and still do) with SATA, and that kind of hardware was (and is) beyond my budget.
I've pushed the firm I collaborate with to sell solutions based on Proxmox rather than VMware (I'm really allergic to proprietary software...), and to keep things cheap they (we?) sold SATA-based servers. But the first time we tested a RAID + SAS + BBU setup (unfortunately as RAID5, since my boss doesn't seem to understand that nobody here suggests it and everyone pushes for RAID10, and surely there are good reasons for that...) we were shocked by the huge difference under load.
Now they always try to sell powerful servers, but often they have to sell SATA, since in many cases it is "good enough", and for us it is much better to have a client with virtualization (easy to back up, so no risky upgrades, experiments, fixes, installations, etc.). With the really small SMEs we have in Italy, the money you would spend just on a RAID controller + SAS HDs is more than the cost of an entire dedicated SATA server.
Just my experience so far.
 
