We had a server crash recently (the power supply died; it turns out we were running 150-watt power supplies in our 3U rackmount servers, and they just couldn't keep up with 6 hard drives, a RAID card, etc. Surprised it worked at all! LOL!)
Anyways, we've been having some big problems keeping up with disk I/O, so I thought while the server was down I'd play around a little bit.
I don't have the exact PVE version information, but the server (as it stood) was originally a v1.5 install that had been apt-get upgrade'd to v1.6 and was running a 2.6.18-4 kernel.
The hardware:
Gigabyte motherboard
AMD Phenom x4 9850 quad-core processor
8GB RAM
Highpoint 2640x4 RAID controller card (I know, I have Adaptec 5805Z's here just waiting to go in).
1 Hitachi 500GB SATA2 7,200 RPM Drive - Boot & Proxmox
1 Western Digital Black 2TB SATA2 7,200 RPM Drive - VM Backups
4 Seagate 500GB SATA2 7,200 RPM Drives in RAID 10 - Virtual Machines
Running dbench on the RAID 10 from the host, I got ~160 MB/sec throughput. In an OpenVZ container (we've been using OpenVZ for everything, since I thought it had less overhead) I got ~15 MB/sec. Nothing else was running at the time.
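To make that comparison repeatable, something along these lines should work, with the same client count and duration on both sides; the container ID and paths are just placeholders for whatever applies on your system:

Code:
# Host run, against the RAID 10 volume (path is only an example):
dbench -t 60 -D /var/lib/vz 10

# Same run from inside the OpenVZ container (CTID 101 is a placeholder):
vzctl exec 101 'dbench -t 60 -D /tmp 10'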
So, OBVIOUSLY that's a problem. The first step in solving any problem is to make sure you're up to date. I ran apt-get update; apt-get upgrade and nothing was updated. So next I went to the website, noticed a new release had come out recently, downloaded it, and re-installed from scratch.
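(A side note, and this is an assumption on my part about plain Debian behavior rather than anything Proxmox-specific: apt-get upgrade won't install packages that pull in new dependencies, which is usually how a newer kernel package arrives, so a dist-upgrade may find things the plain upgrade missed.)

Code:
# "upgrade" never installs new packages (e.g. a new kernel image);
# "dist-upgrade" will, after the package lists are refreshed.
apt-get update
apt-get dist-upgrade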
The new install is running a 2.6.32-5 kernel. Now, I haven't gotten too far into my testing on the new server yet, but I've noticed some strange results, and I'm wondering: what is the best way to test hard drive throughput?
(EDIT: The new Proxmox install on the 2.6.32-5 kernel seems to give much better performance than the 2.6.18-4 kernel did, but see below.)
By strange, I mean that buffered reads (from pveperf and hdparm) show ~89 MB/sec on the single Hitachi drive under the host (I haven't even gotten to the RAID yet; I'm trying to build a big comparison matrix), but 160 MB/sec to 380 MB/sec under a VM... and the numbers in a VM can vary by as much as 100 MB/sec from one run to the next!
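My working assumption is that reads inside a VM can be served from the host's page cache, so hdparm in a guest can easily report more than the physical disk can actually do, and the big run-to-run swings would fit that too. The runs I'm comparing look roughly like this (device names and paths are whatever applies on each side):

Code:
# On the host, against the raw device and the Proxmox storage path:
hdparm -tT /dev/sda
pveperf /var/lib/vz

# Inside a guest the same command hits the virtual disk, and the host's
# page cache can satisfy the reads, inflating the result:
hdparm -tT /dev/sda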
So far, my matrix includes dbench, bonnie++, hdparm, and just a good old-fashioned dd if=/dev/zero of=blah bs=8k count=256k to measure write speed.
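With 8GB of RAM in the host, a plain dd like that mostly measures the page cache unless the data is forced out to disk, so adding a sync or direct I/O should give more honest numbers; the file name is just a placeholder:

Code:
# Flush the data to disk before dd reports a rate:
dd if=/dev/zero of=blah bs=8k count=256k conv=fdatasync

# Or bypass the page cache entirely:
dd if=/dev/zero of=blah bs=8k count=256k oflag=direct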
The VMs only have 512MB of RAM allocated, and I'm trying to find what gives the best performance...
OpenVZ or KVM
And then what options should I use: IDE or SCSI, LVM or NO LVM, RAW, QCOW, or VMDK?
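For the image-format part of the matrix, creating the same-sized disk in each format with qemu-img should keep the comparison fair; the file names and size here are only placeholders:

Code:
qemu-img create -f raw   test.raw   10G
qemu-img create -f qcow2 test.qcow2 10G
qemu-img create -f vmdk  test.vmdk  10G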
If someone already has all of these answers, then please enlighten me.