Performance/fsync on HP ML350 G5

svendsen

Renowned Member
Apr 18, 2012
Hi guys/girls,

Just installed Proxmox on an "old" ML350 G5 server with the following specs:

- 1600 MHz quad core
- 16GB RAM
- 4 x 300GB SAS, 10k RPM

RAID is configured as RAID 10, write cache should be enabled - as far as I know - and there should be a battery module installed (and charged).
I've installed some PSP tools, and hpacucli gives the following:

Smart Array E200i in Slot 0 (Embedded)
Bus Interface: PCI
Slot: 0
Serial Number: QL77MW2032
Cache Serial Number: P9A3A0B9SUV7AW
RAID 6 (ADG) Status: Disabled
Controller Status: OK
Hardware Revision: Rev A
Firmware Version: 1.86
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 15 secs
Surface Scan Mode: Idle
Post Prompt Timeout: 0 secs
Cache Board Present: True
Cache Status: OK
Accelerator Ratio: 50% Read / 50% Write
Drive Write Cache: Enabled
Total Cache Size: 128 MB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: False
Array: A
Interface Type: SAS
Unused Space: 0 MB
Status: OK
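For reference, the cache and battery state shown above can be queried directly with hpacucli (a sketch; the slot number is taken from the output above and may differ on other machines):

```shell
# Controller-level view: cache size, battery status, accelerator ratio.
hpacucli ctrl slot=0 show detail | grep -i -E 'cache|battery'

# Per-logical-drive view: "Caching: Enabled" here means the array
# accelerator (write-back cache) is active for that logical drive.
hpacucli ctrl slot=0 ld all show detail
```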


So everything looks fine, but pveperf gives these numbers:

CPU BOGOMIPS: 12800.39
REGEX/SECOND: 572899
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 194.73 MB/sec
AVERAGE SEEK TIME: 5.39 ms
FSYNCS/SECOND: 943.62
DNS EXT: 86.33 ms
DNS INT: 81.45 ms (local)


As far as I can tell from the forum, the fsync value should be 2-3 times higher with my specs. Am I right or wrong? Any ideas?
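pveperf's fsync figure can be sanity-checked with a crude shell loop like the one below. This is only a rough sketch, not pveperf's exact method; it counts how many small synchronous writes complete in two seconds:

```shell
#!/bin/sh
# Rough approximation of pveperf's FSYNCS/SECOND: repeatedly write one 4k
# block and force it to stable storage, then divide by the elapsed time.
tmp=$(mktemp)
count=0
end=$(( $(date +%s) + 2 ))
while [ "$(date +%s)" -lt "$end" ]; do
    # conv=fsync makes dd call fsync() on the file before exiting
    dd if=/dev/zero of="$tmp" bs=4k count=1 conv=fsync 2>/dev/null
    count=$((count + 1))
done
rm -f "$tmp"
echo "approx $((count / 2)) fsyncs/second"
```

Run this on the volume in question; a controller with a working write-back cache should complete far more of these loops than one running in write-through mode.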
 
I had similar speeds on an IBM System x3650 box. It turned out that the battery (BBU) on the RAID card had died and the card fell back to "Write Through" mode.
Check that you don't have a similar issue and that the RAID set is set to "Write Back".
I'd guess that the disk cache is turned on, because on the IBM with it turned off the FSYNCS dropped as low as 22.
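On HP Smart Array controllers the same checks can be done with hpacucli (a sketch, using the slot number from the earlier output; `dwc` is the per-disk "Drive Write Cache" setting mentioned above):

```shell
# Confirm the array accelerator (write-back cache) status on the config.
hpacucli ctrl slot=0 show config detail | grep -i -E 'status|caching|accelerator'

# Toggle the per-disk drive write cache; note this cache is not protected
# by the controller's BBU, so enabling it trades safety for speed.
hpacucli ctrl slot=0 modify dwc=disable
```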
 
Just replaced the batteries.
While the battery is charging I get the following pveperf results:

root@proxmox:~# pveperf
CPU BOGOMIPS: 12803.61
REGEX/SECOND: 617419
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 216.39 MB/sec
AVERAGE SEEK TIME: 5.41 ms
FSYNCS/SECOND: 181.65
DNS EXT: 75.24 ms
DNS INT: 81.92 ms (local)


So it seems the initial ~900 fsyncs/s WAS with a fully working battery.

Do you have other tips/ideas?
 
Do you use ext3 or ext4? ext4 is known for bad fsync performance.
 
I use ext4, but I have tried formatting the data volume as ext3 and running pveperf on it. Same results.
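The ext3 comparison can be run along these lines (a sketch; the volume and mount point names are examples only, and mkfs is destructive):

```shell
# Hypothetical example: reformat a spare LV as ext3 and point pveperf at it.
# WARNING: mkfs destroys all data on the target volume.
mkfs.ext3 /dev/mapper/pve-data
mount /dev/mapper/pve-data /mnt/test
pveperf /mnt/test   # the fsync rate is reported as FSYNCS/SECOND
```

pveperf accepts a path argument, so the test can be pointed at any mounted filesystem rather than the root volume.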
 
Thanks for your tip, snowman. I have checked that I'm running the latest firmware and have played a little with the accelerator ratio; it doesn't seem to have any influence.
I have also run pveperf on another identical server with 5 disks in RAID 5, and I get more or less the same fsync values.

So I think the bottleneck is the E200i controller with "only" 128MB cache + BBU.
 
E200i is crap!


Hi,

the fsync numbers are in line with what the E200i controller can do. It is a really awful controller for virtualization. I also tried everything to improve performance, but it is useless; this controller maxes out at around 900 fsyncs/s. I used Seagate ES SATA drives and WD1003FBYX drives; write performance was catastrophic, and read performance sometimes reached 90MB/s (SATA), but that was the maximum. If you have I/O-intensive guests, this is a knockout.

Get a P400 controller with 512MB BBWC as a replacement (used, 50-100€).

PS: On a manually configured KVM system on Gentoo with software RAID and the same disks (WD1003FBYX) connected to the mainboard controller, I got twice(!) the performance of the E200i.

Regards
Eike
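Eike's software-RAID comparison can be reproduced along these lines (a sketch only; the device names and array layout below are assumptions for illustration, and mdadm/mkfs wipe the disks they are given):

```shell
# Hypothetical sketch: build a Linux software RAID 10 from four disks on the
# onboard controller, then benchmark it with pveperf.
# WARNING: destroys all data on the listed devices -- adjust names first.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext3 /dev/md0
mkdir -p /mnt/mdtest
mount /dev/md0 /mnt/mdtest
pveperf /mnt/mdtest   # compare FSYNCS/SECOND against the E200i numbers
```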
 
Re: E200i is crap!


Today I changed the E200i for a P400 with 512MB... The fsyncs are now more than twice as fast (2000 fsyncs/s).
 
Re: E200i is crap!

Yes, we have changed to the P400/512MB too. We bought 4-5 old-stock cards and changed the controllers on all our ML350s. It gave a HUGE performance improvement!!
We now see up to 2800 fsyncs/s.
Thanks for the tip, Eike!
 