I've read some posts regarding IO performance of KVM guests, and I got the feeling that almost nobody did a decent benchmark or test before posting complaints. So below are some ideas to think about.
Recently I did some testing with KVM on Ubuntu hosts, and we did some benchmarking and tweaking to get maximum performance. In our tests we used only virtio as the storage driver... the other options were all slower. After benchmarking from the hosts and from the guests we decided to change the I/O scheduler of the host from CFQ to DEADLINE... of course it depends on your storage backend, but in our case (Areca hardware RAID) the guests became much more responsive and faster.
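For reference, you can check and switch the scheduler at runtime through sysfs; the device name below (sda) is just an example, use whatever block device actually backs your VM storage:

  # show the available schedulers, the active one is shown in brackets
  cat /sys/block/sda/queue/scheduler
  # switch to deadline until the next reboot
  echo deadline > /sys/block/sda/queue/scheduler
  # to make it permanent, add elevator=deadline to the kernel boot options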
About benchmarking: it isn't really fair to expect the same performance from a guest as you see on the host... there are a lot of software layers, queues and caches in between. When we benchmark storage solutions we always run tests on multiple VMs simultaneously, and in the case of SAN/NAS storage preferably from multiple hosts at the same time. Just running hdparm isn't really a decent benchmark... tools like Bonnie++, IOzone and Intel's Iometer can give you great info when used right. As for MB/s, our KVM tests resulted in 170 MB/s on the host and almost 90 MB/s in a VM... not bad. When we ran the test in two VMs at the same time they each achieved almost 70 MB/s, so 140 MB/s together... not bad at all.
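To give an idea, a typical Bonnie++ run inside a guest looks something like this; the mount point and file size are just example values, the point is to pick a size well above the guest's RAM:

  # run as a normal user against the filesystem you want to test
  # -d: test directory, -s: file size in MB (8 GB here), -n 0: skip the small-file tests, -m: label for the report
  bonnie++ -d /mnt/test -s 8192 -n 0 -m kvm-guest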
Some tips for benchmarking storage solutions:
- Be sure the benchmark creates files larger than your RAM, otherwise you're testing RAM. Sometimes it can be a good idea to limit the amount of RAM to the minimum needed and reboot.
- Always run benchmarks multiple times and check that they tell you the same thing; be sure the server is in the same state before every benchmark, reboot if needed.
- Benchmark different block/record sizes. What you really want to see is not the MB/s under the most optimal conditions, but how many IOs per second you get... that's what matters (see the IOzone example after this list).
- Create some scripts so you can run simultaneous benchmarks in different VMs... when your solution is in production, that is what happens in real life (a small sketch follows after this list).
- Always test a SAN/NAS solution from multiple hosts simultaneously. The performance can be really great when just one host is connected, but in most cases it will drop when multiple hosts send IOs to it... smart solutions are able to use their cache and do some reordering of IOs in their queues, but some don't. Dual-controller units have to mirror cache memory, which can also be a reason why they slow down.
- Read something about I/O schedulers, caches and filesystems. They all have tuning options that can be useful... however, be careful not to change too much... most of it works really well in the default config.
- Read about aligning LVM to the stripe size of your RAID storage... you can create your PV with a custom metadata size (or data alignment) so that the PV's data area starts on a stripe boundary of your RAID set. Doing so helps you send more full-stripe writes to your array; full-stripe writes are faster since no data has to be read first. Not aligning your PVs results in a lot fewer full-stripe writes and degrades write performance.
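As an illustration of that last point, newer lvm2 versions let you set the alignment directly; the 256 KiB value and the device name below are just examples, use the full stripe size of your own array:

  # align the start of the PV data area to a 256 KiB stripe (example value, example device)
  pvcreate --dataalignment 256k /dev/sdb
  # older lvm2: pick a metadata size just under the stripe size instead,
  # e.g. pvcreate --metadatasize 250k /dev/sdb, which starts the data area at 256 KiB
  # verify where the data area starts
  pvs -o +pe_start /dev/sdb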
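For the file size and record size tips above, an IOzone run could look something like this; the 8 GB file and the record sizes are just example values, the point is to go well past your RAM and to look at small random IO instead of only big sequential transfers:

  # -s: file size (bigger than RAM), -r: record size, -I: use O_DIRECT to bypass the page cache
  # -i 0 = write/rewrite, -i 2 = random read/write, -f: location of the test file
  iozone -s 8g -r 4k -i 0 -i 2 -I -f /mnt/test/iozone.tmp
  iozone -s 8g -r 64k -i 0 -i 2 -I -f /mnt/test/iozone.tmp
  # dividing the reported kB/s by the record size gives a rough IOs-per-second figure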
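And a minimal sketch of how simultaneous runs in several guests can be kicked off; the hostnames and the iozone command line are only placeholders for whatever benchmark you use, and it assumes ssh key access to the guests:

  #!/bin/sh
  # start the same benchmark in all guests at the same time and wait for the results
  for vm in vm01 vm02 vm03; do
      ssh root@$vm "iozone -s 8g -r 4k -i 0 -i 2 -I -f /mnt/test/iozone.tmp" > result-$vm.txt &
  done
  wait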
Werner Reuser
XL-Data Hosting, Virtualization & Storage Solutions