SAS / SATA - Which to choose?

michaeljk

Renowned Member
Oct 7, 2009
We are just testing some new HP DL380 G8 machines (32 GB RAM, Smart Array Gen8 controller with RAID10, 2 CPUs with 8 cores each + Hyper-Threading, Proxmox 3). The first server has 4x 1 TB SATA drives in RAID10:

Code:
CPU BOGOMIPS:      127684.80
REGEX/SECOND:      1323724
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    401.60 MB/sec
AVERAGE SEEK TIME: 7.00 ms
FSYNCS/SECOND:     4475.75
DNS EXT:           43.03 ms
DNS INT:           39.48 ms

The second one has 8x 300 GB SAS drives in RAID10:

Code:
CPU BOGOMIPS:      127680.00
REGEX/SECOND:      1207380
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    985.61 MB/sec
AVERAGE SEEK TIME: 3.74 ms
FSYNCS/SECOND:     3425.66
DNS EXT:           37.97 ms
DNS INT:           41.70 ms
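(For reference, both outputs above come from pveperf, run without arguments so it benchmarks the root filesystem.)

Code:
pveperf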

As you can see, the SATA drives show a better fsync value than the SAS drives, but lower buffered reads. So which configuration would you choose for running VMs with Debian Linux (KVM, raw files, no VirtIO for network or storage), used as web servers (LAMP) with many simultaneous requests? Alternatively, we could also place the storage on external (NFS) servers, but I guess the performance over a Gigabit link will not be good enough. Are there any improvements in Proxmox 3 in terms of cluster storage (Ceph, Sheepdog, ...)?
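For the LAMP workload I'm mostly worried about small synchronous writes, so I'm thinking of comparing the two boxes with an fio run roughly like the one below rather than relying only on the single pveperf fsync number (the file path and job parameters are just an example, not tuned to anything):

Code:
# 4k random writes, fsync after every write, 4 parallel jobs
# (path, size and runtime are only example values)
fio --name=synctest --filename=/var/lib/vz/fio-test.bin --size=1G \
    --rw=randwrite --bs=4k --fsync=1 --numjobs=4 \
    --runtime=60 --time_based --group_reporting

The IOPS this reports should be closer to what MySQL and busy PHP sessions will actually see than the buffered-read figure.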

We also run some OpenVZ systems - as far as I know, these can only be stored locally or on NFS storage, right?
 
Hi,
are both servers idle?
Normally the good fsync values come from the RAID controller cache. And the seek time is much better with SAS.

SAS is also much better for lots of parallel I/O. SATA is OK for bulk data, but for speed SAS is much better.
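You can check whether the controller cache is actually enabled (and the battery/capacitor is OK) with the HP CLI tool, something like this (the slot number is just an example; on Gen8 the tool may be shipped as hpssacli instead of hpacucli):

Code:
# controller details incl. cache size, cache status and battery/capacitor status
hpacucli ctrl slot=0 show
# full config with logical drives and cache settings
hpacucli ctrl all show config detail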

Udo
 
Both servers are idle, just a fresh install of Proxmox 3 without any VM running. SAS would only give 1.2 TB of disk space in RAID10, but that is OK if we need the better performance. I'm just wondering how these speeds (400-900 MB/sec) could be reached with central storage, e.g. NFS. A Gigabit link can reach 125 MB/sec in theory, and even with bonding the speed would only be half of the local value.
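To see what the link really delivers, a quick iperf test between the Proxmox node and the NFS box would probably be the first step (the IP address and options are just an example):

Code:
# on the NFS server
iperf -s
# on the Proxmox node: 30 second test with 4 parallel streams
iperf -c 192.168.0.50 -t 30 -P 4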
 
I don't fully understand your question... this seems not to be SATA vs SAS, but 4x SATA vs 8x SAS; of course, with a good controller, striped storage gives better I/O the more drives you have. Try 8x SATA vs 4x SAS and you will probably get the opposite result. You could also try the WD VelociRaptor SATA series of drives.
Also, a 1 Gbit link delivers somewhat less than 100 MB/s in practice (the IP stack is not 100% payload), bonding can't reasonably go much further (I've never tried to bond more than 2 interfaces), and 10 Gbit (maybe bonded) seems to be the way to go.
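If you do end up bonding two ports, the usual Debian /etc/network/interfaces setup looks roughly like this (interface names, addresses and the 802.3ad mode are just an example; LACP also needs support on the switch):

Code:
# /etc/network/interfaces snippet (ifenslave package installed)
auto bond0
iface bond0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4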
Ceph storage can apparently be very fast, even though that test environment had redundancy disabled; have a look at:
http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-1-introduction-and-rados-bench/
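The numbers in that post come from rados bench; once you have a cluster and a test pool you can run the same kind of test yourself (pool name and duration are just an example):

Code:
# 60 seconds of object writes against the pool named "test"
rados bench -p test 60 write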
 
I don't fully understand your question... this seems not to be SATA vs SAS, but 4x SATA vs 8x SAS; of course, with a good controller, striped storage gives better I/O the more drives you have. Try 8x SATA vs 4x SAS and you will probably get the opposite result. You could also try the WD VelociRaptor SATA series of drives.

I chose 8x SAS because of the disk space (600 GB wouldn't be enough). But you're right, the SAS configuration with striping should be faster - this is true for the buffered reads, but not for the fsyncs/sec. Which value is more important?
 
We've used the WD Raptor/VelociRaptor for a long time.

In my experience it comes down to activity vs capacity.

Under low activity and low queue depth the SATA usually outperforms the SAS.

I would suspect this is because most enterprise SATA drives these days have 32-64 MB of cache per drive, whereas most SAS drives have only 16 MB of cache.
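If you want to check that on your own disks, hdparm reports the on-drive cache, e.g. (the device name is just an example, and this only works for disks the OS sees directly, not ones hidden behind a hardware RAID controller):

Code:
# look for the "cache/buffer size" line in the identify data
hdparm -I /dev/sda | grep -i buffer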

Under medium load this can still be in SATA's favor with a rockstar RAID card (I often use the Areca ARC-1880xi 4GB, which I've had good luck with), but keep a spare on hand, because if you have a failure I hear Areca has slow RMA times.

Under very high activity the SAS will often win out.

Again, we're talking about load.

We've got a 1 TB, 24-disk VelociRaptor array that does amazingly well for under 300 users, with 10 VMs that get hammered during the work day.

In this example they only have a few databases running (Exchange, SharePoint, a few proprietary DB-based programs, etc.).

In this same environment their old SAN was using 7200 RPM desktop SATA drives, and they experienced serious performance issues.

Just a tip if you are looking for off-the-shelf parts: Western Digital doesn't sell a lot of SAS drives to consumers, but they do sell almost every drive model, size and speed to their wholesalers. You can seldom find them at Newegg or another mainstream consumer vendor, but if you reach out to a more wholesale-oriented vendor, I bet you can find whatever you need for the size/speed combination you're after.