I've been trying to figure out why I'm seeing so many I/O delays on my Proxmox host (HP DL360 G5, Xeon E5335 with 6 GB RAM and four 10,000 RPM SAS disks). It's not a heavily utilized machine (load average: 0.15, 0.54, 0.68), but I'd like it to perform better in case I need it for a bigger workload.
Here's what I'm running...
Code:
proxmox1:~# pveversion -v
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.6-24
pve-kernel-2.6.32-4-pve: 2.6.32-24
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-2
ksm-control-daemon: 1.0-4
Here's the output of pveperf...
Code:
proxmox1:~# pveperf
CPU BOGOMIPS: 16001.36
REGEX/SECOND: 673117
HD SIZE: 50.45 GB (/dev/mapper/pve-root)
BUFFERED READS: 28.73 MB/sec
AVERAGE SEEK TIME: 12.08 ms
FSYNCS/SECOND: 3.08
DNS EXT: 111.54 ms
DNS INT: 9.27 ms (clarkcc.com)
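The I/O delay I keep mentioning is the host's iowait; for anyone who wants to compare numbers, something like iostat from the sysstat package shows roughly the same thing (a minimal example, nothing Proxmox-specific):
Code:
# %iowait in the avg-cpu line plus per-device utilization, refreshed every 5 seconds
iostat -x 5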
I almost always have I/O delays ranging from 2% to 50%+. I've read the bugzilla report about kernels newer than 2.6.29 being slow with this RAID controller, and I've adjusted my read_ahead_kb to 64, so the hdparm output now shows this...
Code:
proxmox1:~# hdparm -Tt /dev/cciss/c0d0
/dev/cciss/c0d0:
Timing cached reads: 5196 MB in 2.00 seconds = 2599.62 MB/sec
Timing buffered disk reads: 260 MB in 3.02 seconds = 86.04 MB/sec
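In case anyone wants to try the same change, read_ahead_kb can be set through sysfs roughly as below (the cciss device name uses '!' instead of '/' under /sys/block, and the setting does not survive a reboot):
Code:
# current readahead for the array, in KB
cat /sys/block/cciss!c0d0/queue/read_ahead_kb

# set it to 64 KB until the next reboot
echo 64 > /sys/block/cciss!c0d0/queue/read_ahead_kb

# equivalent using blockdev (the value is in 512-byte sectors, so 128 = 64 KB)
blockdev --setra 128 /dev/cciss/c0d0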
This was about 13 MB/sec with the default kernel options, so it's better, but that still seems like a terrible read speed for a 4-disk RAID 5 array, in my opinion. Has anyone run into this and found a better workaround or a fix for this issue?
Thanks.