VERY slow disk IO on OVH dedicated server..

wipeout_dude
Jul 15, 2012
Hi,

Have set up Proxmox VE on an OVH dedicated server using their installer..

The disk IO performance is VERY bad.. Has anyone else used their servers and worked out how to speed things up??

Thanks.

Code:
~# pveperf /vz/
CPU BOGOMIPS:      44685.28
REGEX/SECOND:      1120717
HD SIZE:           903.80 GB (/dev/mapper/pve-data)
BUFFERED READS:    121.79 MB/sec
AVERAGE SEEK TIME: 14.54 ms
FSYNCS/SECOND:     17.81
DNS EXT:           42.88 ms
DNS INT:           3.01 ms (kimsufi.com)
 
Thanks for the reply..

We have two servers there now; one is Kimsufi and the other is OVH.. Both have shocking performance..

No hardware RAID on either, but both get less than 100 FSYNCS/Sec, and the Kimsufi one (as above) less than 20..

Even my old Core2 desktop in my office that I use for testing, with old 500GB Seagate drives, is able to get ~600 FSYNCS/Sec.. I don't understand why an old desktop can get more than 6 times the performance of a Xeon-based server..

There must be a reason because it just can't be THAT bad but I haven't had time to break it down to work it out yet.. Was hoping someone would have an idea.. :)
 
What is the output of
hdparm /dev/sdX ?

On the slower of the two servers..

Code:
root@in1:~# hdparm /dev/sd[abcd]

/dev/sda:
 multcount     = 16 (on)
 IO_support    =  0 (default)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 121601/255/63, sectors = 1953525168, start = 0


/dev/sdb:
 multcount     = 16 (on)
 IO_support    =256 (???)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 121601/255/63, sectors = 1953525168, start = 0


/dev/sdc:
 multcount     = 16 (on)
 IO_support    =256 (???)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 121601/255/63, sectors = 1953525168, start = 0


/dev/sdd:
 multcount     = 16 (on)
 IO_support    =256 (???)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 121601/255/63, sectors = 1953525168, start = 0

On the Xeon server..

Code:
root@in2:~# hdparm /dev/sd[cd]

/dev/sdc:
 multcount     =  0 (off)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 243201/255/63, sectors = 3907029168, start = 0


/dev/sdd:
 multcount     =  0 (off)
 IO_support    =257 (???)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 243201/255/63, sectors = 3907029168, start = 0
 
Try a speed test on the slower one:

hdparm -Tt /dev/sda

I have run those tests and get >100MB/s (similar to the pveperf result in the original post).. The issue doesn't appear to be raw throughput but IO/transactional performance, which seems odd..
 
Yes, you only have 17.81 FSYNCS/SECOND. This indicates some kind of disk cache problem (maybe the disk cache is turned off?)
 
Probably a dumb question, but how do you enable/disable the cache on SATA disks directly? (There is no hardware RAID controller with any form of battery-backed cache.)

Thanks..
 
Appears to be on..

Code:
 hdparm -W /dev/sd[abcd]

/dev/sda:
 write-caching =  1 (on)


/dev/sdb:
 write-caching =  1 (on)


/dev/sdc:
 write-caching =  1 (on)


/dev/sdd:
 write-caching =  1 (on)
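
For reference, toggling it with hdparm would look like this (a minimal sketch; device name assumed):

Code:
# disable the drive's volatile write cache
hdparm -W0 /dev/sda

# re-enable it (the setting the output above shows)
hdparm -W1 /dev/sda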
 
what file system do you use, ext4? post the output of 'mount'
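
e.g., filtered down to the relevant volume (mount point assumed):

Code:
# show the filesystem type and mount options for the container storage
mount | grep /var/lib/vz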
 
Was originally ext4.. Now set up with Btrfs, and I have attempted using Btrfs in a RAID10 configuration..
Code:
# btrfs filesystem df /var/lib/vz
Data, RAID10: total=10.00GB, used=8.10GB
Data: total=8.00MB, used=0.00
System, RAID10: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID10: total=2.00GB, used=40.86MB
Metadata: total=8.00MB, used=0.00
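
For reference, a volume like that is created along these lines (a sketch only; the partition names are assumptions, and mkfs destroys whatever is on them):

Code:
# stripe and mirror both data and metadata across four devices (RAID10)
mkfs.btrfs -d raid10 -m raid10 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
# register the members with the kernel, then mount any one of them
btrfs device scan
mount /dev/sda4 /var/lib/vz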

In the RAID10 configuration of Btrfs the FSYNCS/Sec has improved, but it's still not near where it should be.. I would expect ~800-1000 FSYNCS/Sec in this configuration..

Code:
~# pveperf /var/lib/vz
CPU BOGOMIPS:      44689.36
REGEX/SECOND:      1075265
HD SIZE:           3596.77 GB (/dev/sda4)
BUFFERED READS:    179.07 MB/sec
AVERAGE SEEK TIME: 3.49 ms
FSYNCS/SECOND:     120.46
DNS EXT:           40.69 ms
DNS INT:           45.41 ms (domain.com)
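
As a cross-check that doesn't involve pveperf, timing synchronous writes with dd gives a rough transactions-per-second figure (test file path assumed):

Code:
# each 4k block is flushed to disk before the next is written, so the
# reported rate divided by the block size approximates syncs per second
dd if=/dev/zero of=/var/lib/vz/ddtest bs=4k count=1000 oflag=dsync
rm -f /var/lib/vz/ddtest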

I can only suspect a hardware issue but can't seem to figure out what it is, especially when my years-old desktop is able to beat the server on performance..
 

Did you install with ext3 on the desktop?
 
btrfs? not really an option. if you don't like ext3, maybe xfs can make you happier.

if you run openvz, ext3 is recommended.
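
going back would be along these lines (a sketch; the volume name is taken from the first pveperf output, and everything on it must be backed up first):

Code:
# WARNING: mkfs destroys the existing filesystem, back up first
umount /vz
mkfs.ext3 /dev/mapper/pve-data
mount /dev/mapper/pve-data /vz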
 
Is there an issue with ext4? Is that why you recommend ext3 or xfs?

I know btrfs is still experimental and, being copy-on-write, will have a performance overhead, but in a RAID10 setup I thought the overhead would be mitigated.. Guess I was wrong.. :)
 
ext3 is fast and stable, recommended for such boxes.
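
one likely reason for the gap, as far as I know: ext4 enables write barriers by default while ext3 historically did not, so fsync-heavy workloads look much slower on ext4 unless there is a battery-backed cache. you can test it like this (not recommended for production without a BBU):

Code:
# remount ext4 without write barriers; unsafe on power loss without a BBU
mount -o remount,barrier=0 /var/lib/vz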
 
