
VERY slow disk IO on OVH dedicated server..

Discussion in 'Proxmox VE: Installation and configuration' started by wipeout_dude, Aug 21, 2012.

  1. wipeout_dude (Member)

    Hi,

    I have set up Proxmox VE on an OVH dedicated server using their installer.

    The disk IO performance is VERY bad. Has anyone else used their servers and worked out how to speed things up?

    Thanks.

    Code:
    ~# pveperf /vz/
    CPU BOGOMIPS:      44685.28
    REGEX/SECOND:      1120717
    HD SIZE:           903.80 GB (/dev/mapper/pve-data)
    BUFFERED READS:    121.79 MB/sec
    AVERAGE SEEK TIME: 14.54 ms
    FSYNCS/SECOND:     17.81
    DNS EXT:           42.88 ms
    DNS INT:           3.01 ms (kimsufi.com)
     
  2. spirit (Well-Known Member, Proxmox VE Subscriber)

    What is the disk hardware? Do you use hardware RAID?

    (Kimsufi is OVH's entry-level range, so don't expect great performance.)
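    A quick way to check from the shell (a sketch; smartctl comes from the smartmontools package):

    Code:
    # any hardware RAID controller will show up on the PCI bus
    lspci | grep -i raid

    # drive model and firmware for each disk
    smartctl -i /dev/sda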
     
  3. wipeout_dude (Member)

    Thanks for the reply.

    We have two servers there now, one Kimsufi and one OVH, and both have shocking performance.

    No hardware RAID on either, but both are at less than 100 FSYNCS/sec, and the Kimsufi one (above) is at less than 20 FSYNCS/sec.

    Even my old Core2 desktop in my office that I use for testing, with old 500GB Seagate drives, manages ~600 FSYNCS/sec. I don't understand why an old desktop can get more than six times the performance of a Xeon-based server.

    There must be a reason, because it just can't be THAT bad, but I haven't had time to break it down and work it out yet. Was hoping someone would have an idea. :)
     
  4. Kaya (Member)

    Do you have write cache and read cache enabled on the disks?
     
  5. wipeout_dude (Member)

    I didn't change anything; it's as it came.

    I have never changed anything on my test PC either; it just worked out of the box.
     
  6. Kaya (Member)

    What is the output of
    hdparm /dev/sdX ?
     
  7. wipeout_dude (Member)

    On the slower of the two servers:

    Code:
    root@in1:~# hdparm /dev/sd[abcd]
    
    /dev/sda:
     multcount     = 16 (on)
     IO_support    =  0 (default)
     readonly      =  0 (off)
     readahead     = 256 (on)
     geometry      = 121601/255/63, sectors = 1953525168, start = 0
    
    
    /dev/sdb:
     multcount     = 16 (on)
     IO_support    =256 (???)
     readonly      =  0 (off)
     readahead     = 256 (on)
     geometry      = 121601/255/63, sectors = 1953525168, start = 0
    
    
    /dev/sdc:
     multcount     = 16 (on)
     IO_support    =256 (???)
     readonly      =  0 (off)
     readahead     = 256 (on)
     geometry      = 121601/255/63, sectors = 1953525168, start = 0
    
    
    /dev/sdd:
     multcount     = 16 (on)
     IO_support    =256 (???)
     readonly      =  0 (off)
     readahead     = 256 (on)
     geometry      = 121601/255/63, sectors = 1953525168, start = 0
    
    
    
    On the Xeon server:

    Code:
    root@in2:~# hdparm /dev/sd[cd]
    
    /dev/sdc:
     multcount     =  0 (off)
     IO_support    =  1 (32-bit)
     readonly      =  0 (off)
     readahead     = 256 (on)
     geometry      = 243201/255/63, sectors = 3907029168, start = 0
    
    
    /dev/sdd:
     multcount     =  0 (off)
     IO_support    =257 (???)
     readonly      =  0 (off)
     readahead     = 256 (on)
     geometry      = 243201/255/63, sectors = 3907029168, start = 0
    
    
    
     
  8. Kaya (Member)

    Try a speed test on the slower one (-T times cached reads, -t times buffered disk reads):

    hdparm -Tt /dev/sda
     
  9. wipeout_dude (Member)

    I have run those tests and get >100 MB/s (similar to the pveperf result in the original post). The issue doesn't appear to be raw throughput but IO/transactional performance, which seems odd.
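    A rough way to isolate sync-write performance from raw throughput, assuming GNU dd (the test file path is just an example):

    Code:
    # write 1000 x 4k blocks, forcing each one out to stable storage;
    # 1000 divided by the elapsed time approximates fsyncs/sec
    dd if=/dev/zero of=/vz/fsync-test bs=4k count=1000 oflag=dsync
    rm /vz/fsync-test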
     
  10. dietmar (Proxmox Staff Member)

    Yes, you only get 17.81 FSYNCS/SECOND. That indicates some kind of disk cache problem (maybe the disk write cache is turned off?).
     
  11. wipeout_dude (Member)

    Probably a dumb question, but how do you enable/disable the cache on SATA disks directly? (There is no hardware RAID controller with any form of battery-backed cache.)

    Thanks.
     
  12. snowman66 (Member, Proxmox VE Subscriber)

    Check with:

    Code:
    proxmox-virt01:~# hdparm -W /dev/sda
    
    /dev/sda:
     write-caching =  1 (on)
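    
    If it came back 0 (off), this would turn it on (a sketch; on Debian it can be made persistent via /etc/hdparm.conf):
    
    Code:
    # enable the drive's volatile write cache
    hdparm -W1 /dev/sda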
     
  13. wipeout_dude (Member)

    Appears to be on:

    Code:
     hdparm -W /dev/sd[abcd]
    
    /dev/sda:
     write-caching =  1 (on)
    
    
    /dev/sdb:
     write-caching =  1 (on)
    
    
    /dev/sdc:
     write-caching =  1 (on)
    
    
    /dev/sdd:
     write-caching =  1 (on)
    
    
    
     
  14. tom (Proxmox Staff Member)

    What file system do you use, ext4? Post the output of 'mount'.
     
  15. wipeout_dude (Member)

    It was originally ext4. It is now set up with Btrfs, and I have tried Btrfs in a RAID10 configuration.
    Code:
    # btrfs filesystem df /var/lib/vz
    Data, RAID10: total=10.00GB, used=8.10GB
    Data: total=8.00MB, used=0.00
    System, RAID10: total=16.00MB, used=4.00KB
    System: total=4.00MB, used=0.00
    Metadata, RAID10: total=2.00GB, used=40.86MB
    Metadata: total=8.00MB, used=0.00
    
    In the RAID10 configuration of Btrfs the FSYNCS/sec figure has improved, but it's still nowhere near where it should be. I would expect ~800-1000 FSYNCS/sec in this configuration.

    Code:
    ~# pveperf /var/lib/vz
    CPU BOGOMIPS:      44689.36
    REGEX/SECOND:      1075265
    HD SIZE:           3596.77 GB (/dev/sda4)
    BUFFERED READS:    179.07 MB/sec
    AVERAGE SEEK TIME: 3.49 ms
    FSYNCS/SECOND:     120.46
    DNS EXT:           40.69 ms
    DNS INT:           45.41 ms (domain.com)
    I can only suspect a hardware issue, but I can't figure out what it is, especially when my years-old desktop beats the server on performance.
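
    For reference, the Btrfs RAID10 array was created along these lines (a sketch; the exact partition names are an assumption based on the four drives shown earlier):

    Code:
    # stripe+mirror both data and metadata across the four drives
    mkfs.btrfs -d raid10 -m raid10 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
    mount /dev/sda4 /var/lib/vz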
     
  16. snowman66 (Member, Proxmox VE Subscriber)

    Did you install with ext3 on the desktop?
     
  17. tom (Proxmox Staff Member)

    Btrfs? Not really an option. If you don't like ext3, maybe XFS can make you happier.

    If you run OpenVZ, ext3 is recommended.
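
    Going back to ext3 on the stock layout would look roughly like this (a sketch assuming the data LV is /dev/pve/data and its contents are backed up first):

    Code:
    # WARNING: destroys everything on the volume
    mkfs.ext3 /dev/pve/data
    mount /dev/pve/data /var/lib/vz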
     
  18. wipeout_dude (Member)

    Is there an issue with ext4? Is that why you recommend ext3 or XFS?

    I know Btrfs is still experimental, and being copy-on-write it will have a performance overhead, but I thought a RAID10 setup would mitigate that overhead. Guess I was wrong. :)
     
  19. tom (Proxmox Staff Member)

    ext3 is fast and stable, recommended for such boxes.
     