SAS 15k 300GB disk (NO RAID!) has only 150 FSYNCS/SECOND? Really?

yatesco

Well-Known Member
Sep 25, 2009
Hi all,

Yeah, I know, you should use hardware RAID. However, there is a configuration issue (http://forum.proxmox.com/threads/32...n-on-Dell-R410-with-4-SAS-disks?highlight=SAS) with the hardware RAID, so I thought I would turn it off and see what happens. Well, the good news is that it works, but the performance isn't exactly mind-blowing:

I have just done a fresh install on a SAS 15K 300GB disk (in a Dell R410 if anyone cares) and got this:

Code:
CPU BOGOMIPS:      72352.84
REGEX/SECOND:      525456
HD SIZE:           187.16 GB (/dev/mapper/pve-data)
BUFFERED READS:    139.26 MB/sec
AVERAGE SEEK TIME: 5.63 ms
FSYNCS/SECOND:     166.33

Buffered reads have been higher, but they are acceptable; the FSYNCS, though, make me want to cry! I only bought these because my other Proxmox install was struggling with poor IO (hardware RAID1 with SATA disks, also produced around 100-150 FSYNCS/SECOND).

So my question is: have I missed something (and please don't say 'yeah, hardware RAID!' :))? Surely these top-of-the-range, super-fast speed beasts should do more than that? Copying large files between them is fantastically quick, but given what I want to do with these, i.e. virtualise them, I was kinda hoping for more performance...

Am I missing the boat, or does that look wrong?

Col
 
You could check your drive's write cache by using:
Code:
hdparm -I /dev/sda
In the output of this command you should find something like:
Code:
Enabled    Supported
   *       SMART feature set
 ...
 ...
           Write cache
Your HW RAID may have disabled this cache to only use its own cache (data safety...).

Enable it using:
Code:
hdparm -W 1 /dev/sda
Now you should see better fsync performance, but you also risk losing data if the power is cut.
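
If you want to check the effect, a minimal sketch (assuming a standard Proxmox install where pveperf is available, the system disk really is /dev/sda, and the VM storage is mounted at /var/lib/vz):
Code:
# query the current write-cache state of the drive
hdparm -W /dev/sda

# re-run the Proxmox benchmark and compare FSYNCS/SECOND
pveperf /var/lib/vz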
 

These numbers look quite normal (no hard drive cache, no RAID controller cache) and show clearly why you need a hardware RAID controller with a powerful write cache. Without one, overall system performance will be low.
 
Dell R710, SAS 15k disks, hardware raid:

HD SIZE: 33.47 GB (/dev/mapper/pve-root)
BUFFERED READS: 147.45 MB/sec
AVERAGE SEEK TIME: 3.82 ms
FSYNCS/SECOND: 2769.28

So as you can see, get hardware raid as soon as possible.
 
Nice - what RAID level was that? Unfortunately the card that I have (in the Dell R410) only offers RAID 1 or 0, and I don't want to give up all my disks to RAID 1. And with only 2 disks in RAID 1 I cannot install Debian :(
 
In this configuration we're using 2x146GB SAS 15k disks in RAID1 for Proxmox, and 4x450GB SAS 15k in RAID10 for regular VM storage.

The controller is the default for this server; I can get you the name if you truly need it!
 
Ooops, this doesn't look good :(
Code:
hdparm -I /dev/sda

/dev/sda1:
 HDIO_DRIVE_CMD(identify) failed: Input/output error
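
hdparm speaks ATA, so on a SAS disk behind this controller the identify command can fail exactly like that. In that case sdparm, which works on SCSI mode pages, may get at the same setting; a sketch, assuming the sdparm package is installed and the disk really is /dev/sda:
Code:
# read the Write Cache Enable (WCE) bit from the caching mode page
sdparm --get=WCE /dev/sda

# turn the drive write cache on (same power-loss caveat as hdparm -W 1)
sdparm --set=WCE=1 /dev/sda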
 
For comparison:
Code:
/dev/sda1:
readonly = 0 (off)
readahead = 256 (on)
geometry = 17769/255/63, sectors = 1048576, start = 64
 
So hardware RAID 0 (across 4 300GB 15k SAS disks) is still pretty poor:

Code:
pveperf /
CPU BOGOMIPS:      72352.63
REGEX/SECOND:      767962
HD SIZE:           94.49 GB (/dev/pve/root)
BUFFERED READS:    438.09 MB/sec
AVERAGE SEEK TIME: 4.02 ms
FSYNCS/SECOND:     189.28
DNS EXT:           324.65 ms

The controller (according to lspci) is
Code:
02:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS (rev 08)
and
Code:
cat /proc/mpt/summary 
ioc0: LSISAS1068E B3, FwRev=00192f00h, Ports=1, MaxQ=266, IRQ=58

Any hints? I am crying with disappointment at this point - this is all 'stock' Dell R410. I am about to install lsiutil to see if I can't improve the system by randomly choosing some options.
 
I thought RAID0 was the fastest (but with no redundancy). I will try RAID 1 with all the disks.

Installing LSIutil did turn up the fact that write caching was turned off - turning that on increased the FSYNCS to, wait for it, I know, exciting isn't it: 255/Second. Woo.
 
PS:

03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)
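
For what it's worth, on a MegaRAID controller like that the write cache is normally a property of the logical drive rather than of the individual disks, and it's usually managed with LSI's MegaCli tool rather than hdparm. A sketch from memory (assuming MegaCli is installed and on the PATH; the binary name varies between packages):
Code:
# show the current cache policy of every logical drive on every adapter
MegaCli -LDGetProp -Cache -LAll -aAll

# switch the logical drives to write-back (relies on the BBU to be safe)
MegaCli -LDSetProp WB -LAll -aAll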
 
and RAID0 is the fastest, but that's the only difference (besides the whole server, controller etc) between our configs.

I must admit I rotfl'd hard when you said this: turning that on increased the FSYNCS to, wait for it, I know, exciting isn't it: 255/Second. Woo.

That's a great sense of humor :D

I'll wait for your feedback
 
Yeah - I am happy to give you guys root access to the machine and DRAC if it would help?
 
I was curious to see the difference reported by pveperf with and without disk write caching.
For information, my results below :

No RAID, test server.

Code:
pve1:/var/log# hdparm -W /dev/sda

/dev/sda:
write-caching = 1 (on)
pve1:/var/log# hdparm -W 0 /dev/sda

/dev/sda:
setting drive write-caching to 0 (off)
write-caching = 0 (off)
pve1:/var/log# pveperf
CPU BOGOMIPS: 20005.09
REGEX/SECOND: 787054
HD SIZE: 94.49 GB (/dev/pve/root)
BUFFERED READS: 72.43 MB/sec
AVERAGE SEEK TIME: 16.42 ms
FSYNCS/SECOND: 104.17
DNS EXT: 72.83 ms
DNS INT: 26.85 ms (philten.com)
pve1:/var/log# hdparm -W 1 /dev/sda

/dev/sda:
setting drive write-caching to 1 (on)
write-caching = 1 (on)
pve1:/var/log# pveperf
CPU BOGOMIPS: 20005.09
REGEX/SECOND: 811105
HD SIZE: 94.49 GB (/dev/pve/root)
BUFFERED READS: 85.70 MB/sec
AVERAGE SEEK TIME: 12.04 ms
FSYNCS/SECOND: 1111.29
DNS EXT: 70.61 ms
DNS INT: 30.43 ms (philten.com)
pve1:/var/log#
 
Wow, write caching really does make a difference to the FSYNCs. Obviously without RAID and a battery backup it is risky yada yada yada :)
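
One caveat: an hdparm -W setting doesn't necessarily survive a reboot. On a Debian-based box like Proxmox you can usually make it stick via /etc/hdparm.conf; a sketch, assuming the stock Debian hdparm package and that the disk is /dev/sda:
Code:
# /etc/hdparm.conf -- applied at boot by the hdparm init scripts
/dev/sda {
    write_cache = on
}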