Here is the hardware I'm reviewing:
Server 1 & 2:
=============
AMD Phenom 9850 Quad Core CPU
Gigabyte Motherboard w/8GB RAM
Highpoint 2640x4 RAID Controller Card
1 x Seagate 7,200 SATA 2 Boot Drive (500GB)
1 x Western Digital Black 7,200 SATA 2 Backup Drive (2TB)
4 x Seagate 7,200 SATA 2 drives in RAID-10 config for VM's
Server #1 has:
==============
2 x Webserving VM's
1 x Database VM
Server #2 has:
==============
2 x Nameserver VM's
1 x Incoming Mail Server
1 x Outgoing Mail Server
I've been graphing server information with Cacti for quite some time, trying to figure out where our performance bottleneck is. I've suspected the hard drives/RAID controller card for a while now because we regularly see 85%+ I/O wait times. I recently wrote a small script to monitor the transactions per second as well as MB/sec throughput to/from the hard drives.
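The script is nothing fancy; a minimal sketch of the idea is below, sampling /proc/diskstats on an interval (the device names and interval here are placeholders, not my actual setup):

#!/usr/bin/env python
# Minimal sketch: sample /proc/diskstats twice and report per-device
# transactions/sec and MiB/sec. Device names below are placeholders.
import time

DEVICES = ["sda", "sdb", "sdc"]  # e.g. boot, backup, RAID array
INTERVAL = 10                    # seconds between samples
SECTOR_BYTES = 512               # /proc/diskstats counts 512-byte sectors

def snapshot():
    """Return {device: (reads, sectors_read, writes, sectors_written)}."""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] in DEVICES:
                stats[parts[2]] = (int(parts[3]), int(parts[5]),
                                   int(parts[7]), int(parts[9]))
    return stats

while True:
    before = snapshot()
    time.sleep(INTERVAL)
    after = snapshot()
    for dev in DEVICES:
        r0, rs0, w0, ws0 = before[dev]
        r1, rs1, w1, ws1 = after[dev]
        tps = (r1 - r0 + w1 - w0) / float(INTERVAL)
        read_mib = (rs1 - rs0) * SECTOR_BYTES / float(INTERVAL) / 2 ** 20
        write_mib = (ws1 - ws0) * SECTOR_BYTES / float(INTERVAL) / 2 ** 20
        print("%s: %.0f tps, %.2f MiB/sec read, %.2f MiB/sec write"
              % (dev, tps, read_mib, write_mib))

This is what I'm seeing: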
Server 1:
=========
Avg. CPU Usage: 16.72% / Max: 67.42%
Load Average: 11.11
Ethernet Inbound Avg: 19.64 Mbit/sec / Max: 77.63 Mbit/sec
Ethernet Outbound Avg: 6.79 Mbit/sec / Max: 79.86 Mbit/sec
Ethernet 95th Percentile: 53.1 Mbit/sec
Root drive:
===========
Read Avg: 252.99 KiB/sec / Max: 459.10 KiB/sec
Write Avg: 28.98 MiB/sec / Max: 57.85 MiB/sec
Transactions Avg: 82.09/sec / Max: 138.05/sec
VM Drive (RAID-10 Array):
=========================
Read Avg: 30.02 MiB/sec / Max: 30.18 MiB/sec
Write Avg: 21.82 MiB/sec / Max: 39.30 MiB/sec
Transactions Avg: 3,460/sec / Max: 4,600/sec
Backup Drive:
=============
Read Avg: 55.61 MiB/sec / Max: 74.36 MiB/sec
Write Avg: 15.07 MiB/sec / Max: 35.06 MiB/sec
Transactions Avg: 7,980/sec / Max: 8,230/sec
Server 2:
=========
Avg. CPU Usage: 18.87% / Max: 100%
Load Average: 2.30
Ethernet Inbound Avg: 6.01 Mbit/sec / Max: 79.92 Mbit/sec
Ethernet Outbound Avg: 20.84 Mbit/sec / Max: 80.00 Mbit/sec
Ethernet 95th Percentile: 55.13 Mbit/sec
Root drive:
===========
Read Avg: 7.82 MiB/sec / Max: 7.82 MiB/sec
Write Avg: 2.59 MiB/sec / Max: 2.59 MiB/sec
Transactions Avg: 679.22/sec / Max: 679.22/sec
VM Drive (RAID-10 Array):
=========================
Read Avg: 22.83 MiB/sec / Max: 22.83 MiB/sec
Write Avg: 10.84 MiB/sec / Max: 10.81 MiB/sec
Transactions Avg: 2,640/sec / Max: 2,900/sec
Backup Drive:
=============
Read Avg: 49.69 MiB/sec / Max: 111.47 MiB/sec
Write Avg: 29.56 MiB/sec / Max: 33.39 MiB/sec
Transactions Avg: 7,250/sec / Max: 8,380/sec
Server #2 has only been collecting data for about 36 hours now, so that's probably why its numbers look a little skewed. Server #1 has only been collecting data for about 72 hours, so take that into account...
The first thing that jumps out at me is that the average read/write speeds appear to be below what I previously measured for these drives, yet the transactions per second far exceed what I expected, let alone what I thought was possible for these drives. From the data above, my backup drives are getting hit the hardest at approximately 8,000 transactions per second (which makes sense), but each is only a single drive, while my RAID array is only seeing ~3,000 transactions per second yet seems to be maxed out...
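As a rough sanity check (assuming a typical ~8.5 ms average seek for a 7,200 RPM desktop drive), a single spindle doing truly random I/O should top out somewhere around 80 transactions per second:

# Back-of-the-envelope ceiling for random I/O on one 7,200 RPM drive.
# The 8.5 ms average seek is an assumption; the rotational math is exact.
avg_seek_ms = 8.5
half_rotation_ms = 60.0 / 7200 / 2 * 1000  # ~4.17 ms at 7,200 RPM
iops = 1000.0 / (avg_seek_ms + half_rotation_ms)
print("theoretical random IOPS per spindle: %.0f" % iops)  # ~79

So ~8,000 transactions/sec on one backup drive has to be mostly small sequential or cached requests, which would explain why it can post huge transaction counts while the RAID array, handling the random VM/database I/O, looks saturated at ~3,000.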
Does this make sense? Is it just because the Highpoint 2640x4 is such a lame RAID card? I'd like your input on my current situation, what you would do differently, and what type of hardware/setup you'd recommend to improve things.
I'm looking to go HA when it becomes available (if not rolling my own sooner than that), so I was thinking of building some sort of home-brew SAN and storing the VM's and backups on it, but I'm not sure the network could withstand the kind of throughput I would need...
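Back-of-the-envelope, using the peak numbers above and ignoring iSCSI/NFS protocol overhead, a single gigabit link looks marginal:

# Would gigabit Ethernet carry the disk traffic if VM's and backups
# moved to a SAN? Peak figures are taken from the stats above.
peaks_mib_sec = {
    "Server 1 VM array, read+write peak": 30.18 + 39.30,
    "Server 2 backup drive, read peak": 111.47,
}
for name, mib in peaks_mib_sec.items():
    mbit = mib * 1.048576 * 8  # MiB/sec -> Mbit/sec
    print("%s: %.0f Mbit/sec (%.0f%% of GigE line rate)"
          % (name, mbit, mbit / 1000.0 * 100))

Backup reads alone would nearly saturate a single gigabit link.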
Thanks!