New Proxmox VE kernels 2.6.18, 2.6.32 and 2.6.35 released to stable

Just to mention, no test card (or money) has arrived here in our test labs, so it looks like your boss decided not to support our test labs?
 
I will shoot him an email and find out. He said he would, so likely after he gets back from abroad. Did he contact you at all?
 
Yes, he contacted Martin, but we are still waiting for the card/money.
 
Thanks Tom. You should expect something later in the day, from what Urs said.

Anyway, we have replaced the cards with 5405Z/5805Zs, but are getting about 2200 fsyncs/sec with 10k SAS drives in a RAID 10, with the .32 kernel (new 1.8 build). We have aligned the stripes and re-initialized the array, and are not seeing better results. With 4 RE4s in RAID 10, we don't do much better, at about 2400 fsyncs/sec.
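
A rough way to sanity-check the alignment from the running system looks like this; the device path /dev/sda2 and the 256 KiB full stripe are just assumed examples, not values from this box:

Code:
# Offset at which LVM data starts on the physical volume; for an aligned
# setup it should be a multiple of the array's full stripe width.
pvs -o +pe_start --units k /dev/sda2

# Partition start sectors; each start should also land on a stripe
# boundary (e.g. a multiple of 512 sectors for a 256 KiB stripe).
fdisk -lu /dev/sda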

At the time of the tests, there are no VMs running:

pveperf /var/lib/vz
CPU BOGOMIPS:      100802.84
REGEX/SECOND:      721044
HD SIZE:           969.05 GB (/dev/mapper/pve-data)
BUFFERED READS:    418.48 MB/sec
AVERAGE SEEK TIME: 5.89 ms
FSYNCS/SECOND:     2107.16
DNS EXT:           54.61 ms
DNS INT:           14.53 ms (xxxx.com)

dd if=/dev/zero of=/var/lib/vz/temp bs=4k count=4000000 conv=fdatasync
4000000+0 records in
4000000+0 records out
16384000000 bytes (16 GB) copied, 86.1892 s, 190 MB/s

hdparm -Tt /dev/sda

/dev/sda:
Timing cached reads: 5996 MB in 2.00 seconds = 2999.98 MB/sec
Timing buffered disk reads: 1438 MB in 3.00 seconds = 479.02 MB/sec

pveversion -v
pve-manager: 1.8-17 (pve-manager/1.8/5948)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-33
pve-kernel-2.6.32-4-pve: 2.6.32-33
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.26-1pve4
vzdump: 1.2-12
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.0-3
ksm-control-daemon: 1.0-5

What should we typically see with these cards? Searching through the forums yesterday, I saw that some are reporting numbers in the 3000 range. Are there any other tweaks you would recommend looking at?

Additionally, I have tried the .35 kernel, and results are nearly identical.
 
Your results don't look bad.

here are some results (all with 2.6.32)

box1: 5805Z with Raid10 (4 x WD1002FBYS)

Code:
CPU BOGOMIPS:      8534.56
REGEX/SECOND:      684729
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    175.59 MB/sec
AVERAGE SEEK TIME: 9.78 ms
FSYNCS/SECOND:     2433.91
DNS EXT:           26.96 ms
DNS INT:           1.12 ms (proxmox.com)

box2: 5805z with Raid10 (4 x WD5001ABYS)
Code:
CPU BOGOMIPS:      19151.97
REGEX/SECOND:      762016
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    112.59 MB/sec
AVERAGE SEEK TIME: 9.47 ms
FSYNCS/SECOND:     2443.00
DNS EXT:           28.87 ms
DNS INT:           0.70 ms (proxmox.com)

box3: 5805 with Raid10 (6 x ST3250824NS)
Code:
CPU BOGOMIPS:      19151.60
REGEX/SECOND:      772463
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    273.32 MB/sec
AVERAGE SEEK TIME: 9.17 ms
FSYNCS/SECOND:     3748.74
DNS EXT:           25.97 ms
DNS INT:           0.71 ms (proxmox.com)

Please note, pveperf and hdparm are just very basic benchmarks. If you want more detail you should run real benchmarks, e.g. with phoronix-test-suite or other tools. Benchmarking is quite complex.
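
As one example of a more targeted test, an fsync-heavy fio job could look roughly like this; fio is just one possible tool, and the size, runtime and directory values are only placeholders:

Code:
# 4k random writes with an fsync after every write; the resulting write IOPS
# is roughly comparable to pveperf's fsyncs/second figure.
fio --name=fsync-test --directory=/var/lib/vz \
    --rw=randwrite --bs=4k --size=1G \
    --ioengine=sync --fsync=1 \
    --runtime=60 --time_based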
 
Thanks Tom, we will be doing some additional benchmarking with bonnie++ and are also awaiting another Areca card (it seems like one of the active users here has had a lot of success with Arecas).

Still curious as to why, with the pveperf tool running under a Debian build with the 6405 + BBU but the same drives, we saw just under 4k fsyncs/sec. Our SAS drives are SAS II, so 6 Gb/s. Perhaps that does make a difference?

Anyway, it might be nice to have a wiki article about controller + disk + CPU/RAM + RAID level, with basic performance benchmarks, so others can get a better feel for building systems. I see there is one about controller cards, but it is pretty slim in terms of content. From my experience, disk + controller seem to be the key factors for maximizing real-world performance and preventing I/O bottlenecks.
 
Feel free to add a wiki article. But hardware becomes outdated quickly, so keeping such an article up to date is quite challenging.
 
Still curious as to why, with the pveperf tool running under a Debian build with the 6405 + BBU but the same drives, we saw just under 4k fsyncs/sec. Our SAS drives are SAS II, so 6 Gb/s. Perhaps that does make a difference?

You may want to check what (read, write) caching your controller has set. There are also caches you can set per drive attached to the controller card. Setting all this can make a really big difference. Also, if you have no BBU, the controller may disable all caches.

Unfortunately, each hardware RAID card has a different (vendor) tool to do it; some do not allow manipulating drive caches, so your results may be very different.
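
For the Adaptec cards discussed in this thread the vendor tool is arcconf. Inspecting the current settings could look roughly like this; controller number 1 is just an assumed example, and changing the modes goes through arcconf setcache, whose exact syntax differs between arcconf versions:

Code:
# Logical-device view: shows the read-cache and write-cache mode per array.
arcconf getconfig 1 ld

# Physical-device view: shows the write-cache state of each attached drive.
arcconf getconfig 1 pd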
 
Cool, thanks for the feedback. We were able to build a custom kernel and get it working... but now another question:

Anyone try the (PMC) Adaptec 6805/6405 series? We got a few new servers with this card based on Adaptec saying the card used the same drivers, but this is false. We got it working in a Debian install using the drivers from Adaptec, but are having trouble in Proxmox.

Will this driver (Adaptec 6 series) be included in an upcoming release of Proxmox? (It flies with the BBU under Debian - faster and cheaper than the 5405Z/5805Z, and a cable is included as well!)

Here we go: http://forum.proxmox.com/threads/6521-New-Proxmox-VE-1.8-ISO-with-latest-packages

Adaptec 6 series works out of the box, waiting for your feedback and test results.
 
Very Cool! How is the performance? Did you have a chance to run some benchmarks?
 
Very Cool! How is the performance? Did you have a chance to run some benchmarks?

Not really, as I do not have free 6 Gb/s SATA drives here. Looking for a sponsor ;-)

But I assume it will be identical to the tests you did with Debian and the driver from Adaptec.
 
looking for a sponsor ;-)

LOL .. I will talk to my boss again during our staff meetings tomorrow ;-)

Also, I assume this driver will be in all future Proxmox releases, correct? And any chance that the Areca 1880 series will be included? That one blows the Adaptec away! (We built a custom kernel, but are not looking forward to rebuilding it on each kernel update.)

Thanks again!
 
LOL .. I will talk to my boss again during our staff meetings tomorrow ;-)

Also, I assume this driver will be in all future Proxmox releases, correct?

It works in the current 2.6.32 kernel branch for Proxmox 1.x, based on Squeeze (it does not work in the 2.6.18 and the current 2.6.35 kernels).

And any chance that the Areca 1880 series will be included? That one blows the Adaptec away! (We built a custom kernel, but are not looking forward to rebuilding it on each kernel update.)

Thanks again!

As far as I can see, the needed module is in all kernels later than 2.6.37/38. The current arcmsr module in Squeeze does not support it (yet).
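
A quick way to check whether the arcmsr module of a given kernel knows a particular card is to compare the card's PCI ID with the module's alias list, roughly like this (the grep patterns are just examples):

Code:
# Show the controller's PCI vendor:device ID.
lspci -nn | grep -i raid

# List the PCI IDs the installed arcmsr module claims to support; if the
# card's ID is not among the aliases, the module will not bind to it.
modinfo arcmsr | grep -i alias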
 
Here we go: http://forum.proxmox.com/threads/6521-New-Proxmox-VE-1.8-ISO-with-latest-packages

Adaptec 6 series works out of the box, waiting for your feedback and test results.

I have an Adaptec 6405 running for a couple of weeks now. Maybe I'm being a knucklehead and overlooking something, but according to Adaptec documentation, LVM partitioning isn't supported with the 6 series cards under Debian. My understanding is that the bare-metal installer creates LVM partitions by default. Wouldn't the installer fail? I attempted a plain Debian 5 install with LVM on the RAID array and, while the install looked successful, the machine wouldn't boot.

The way I got everything working was by installing Debian 5 on ext3 and then installing Proxmox VE from the repository. Obviously, I don't have any of the advantages that LVM would provide with this configuration. I turned on a snapshot backup in addition to a suspend/resume backup. The snapshot never shows an error, and the size looks correct, but I really don't know if the snapshot would work if I tried to restore it, which is why I do both.
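
For reference, the two modes are selected roughly like this with the vzdump shipped in 1.x; the VM ID 101 and the dump directory are just example values:

Code:
# Snapshot mode: needs free space in the LVM volume group for a temporary
# snapshot volume, so it requires an LVM-backed /var/lib/vz.
vzdump --snapshot --dumpdir /backup 101

# Suspend mode: briefly suspends the guest for the final sync, works
# without LVM.
vzdump --suspend --dumpdir /backup 101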

Do you have a recommendation of a better way to set this up or is my understanding of the bare-metal installer, LVM, and snapshot backups wrong?
 
I have an Adaptec 6405 running for a couple of weeks now. Maybe I'm being a knucklehead and overlooking something, but according to Adaptec documentation, LVM partitioning isn't supported with the 6 series cards under Debian. ...

Please point me to this documentation, I can't believe that. We have a 6805 in our test lab; it works here.
 
Thanks for the link, you are right, quite strange. But probably this points to an old driver (aacraid 1.1.5). I installed with LVM partitioning (using today's ISO installer) without problems.
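
If someone wants to verify which aacraid driver a box is actually running, something along these lines should do (the exact output format differs between kernel versions):

Code:
# Driver version built into the installed module (if the module exports one).
modinfo aacraid | grep -i version

# Version reported by the driver at load time.
dmesg | grep -i aacraid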
 
Thanks for the link, you are right, quite strange. But probably this points to an old driver (aacraid 1.1.5). I installed with LVM partitioning (using today's ISO installer) without problems.

Thanks for the follow-up and the confirmation that this works fully. I'll be using the new ISO and attempting a bare-metal install with the 6405 this weekend.
 
Please report your results; it would be nice if you could also post the output of pveperf (include your hardware details).
 
