Thanks Tom. From what Urs said, you should expect something later in the day.
Anyway, we have replaced the cards with 5405Z/5805Zs, but are only getting about 2200 fsyncs/sec with 10k SAS drives in a RAID 10 on the .32 kernel (new 1.8 build). We've aligned the stripes and re-initialized the array, but are not seeing better results. With 4 RE4s in RAID 10, we don't do much better, at about 2400 fsyncs/sec.
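For reference, this is roughly how we've been verifying the alignment (assuming the array presents as /dev/sda with LVM on top; the device name and stripe size here are just placeholders for our setup):

parted /dev/sda unit s print
(check that the partition start sector falls on a stripe boundary)

pvs -o +pe_start --units s
(check that the LVM data area itself starts on a stripe boundary)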
At the time of these tests, there are no VMs running:
pveperf /var/lib/vz
CPU BOGOMIPS: 100802.84
REGEX/SECOND: 721044
HD SIZE: 969.05 GB (/dev/mapper/pve-data)
BUFFERED READS: 418.48 MB/sec
AVERAGE SEEK TIME: 5.89 ms
FSYNCS/SECOND: 2107.16
DNS EXT: 54.61 ms
DNS INT: 14.53 ms (xxxx.com)
dd if=/dev/zero of=/var/lib/vz/temp bs=4k count=4000000 conv=fdatasync
4000000+0 records in
4000000+0 records out
16384000000 bytes (16 GB) copied, 86.1892 s, 190 MB/s
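As a rough cross-check on the FSYNCS/SECOND figure, we also ran a synced-write test (oflag=dsync is not exactly what pveperf measures, but it gives a ballpark for small synchronous writes):

dd if=/dev/zero of=/var/lib/vz/synctest bs=4k count=10000 oflag=dsync

Dividing the reported throughput by the 4k block size gives an approximate synced-writes-per-second number to compare against pveperf.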
hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 5996 MB in 2.00 seconds = 2999.98 MB/sec
Timing buffered disk reads: 1438 MB in 3.00 seconds = 479.02 MB/sec
pveversion -v
pve-manager: 1.8-17 (pve-manager/1.8/5948)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-33
pve-kernel-2.6.32-4-pve: 2.6.32-33
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.26-1pve4
vzdump: 1.2-12
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.0-3
ksm-control-daemon: 1.0-5
What should we typically see for these cards? Searching through the forums yesterday, I saw that some users are reporting numbers in the 3000 range. Are there any other tweaks you would recommend looking at?
Additionally, I have tried the .35 kernel, and results are nearly identical.
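For what it's worth, these are the things we still plan to double-check on our side; the arcconf syntax below assumes the controller shows up as adapter 1, so adjust as needed:

arcconf getconfig 1 ld
(confirm the logical drive write cache is actually set to write-back)

grep /var/lib/vz /proc/mounts
(see whether barriers are enabled on the data volume, since that can cap fsync rates even with a protected write cache)

cat /sys/block/sda/queue/scheduler
(see which I/O scheduler is in use)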