KVM kills SSD performance

tincboy

Renowned Member
Apr 13, 2010
I've just installed Proxmox on my new server, which has a RAID 10 array of 4 ADATA 256 GB SSD drives.
I tested write speed on the host and got 538 MB/s, but running the same test inside a KVM VM on top of the same disks shows about a quarter of that performance.
I'm using VIRTIO for the disk, and I'd be grateful for any configuration that can fix this issue.
Is anyone familiar with improving the I/O performance of VMs on SSD drives?
 
What filesystem, and which mount options?

If the filesystem is ext4, use barrier=0 and avoid the discard mount option, since discard forces a sync for every write.
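For reference, a mount with barriers disabled could look like the hypothetical /etc/fstab entry below (the device path is an assumption; note that barrier=0 trades crash safety for speed, so it is best reserved for setups with a battery-backed RAID cache):

Code:
# hypothetical ext4 entry for the VM storage, write barriers disabled
/dev/mapper/pve-data  /var/lib/vz  ext4  defaults,noatime,barrier=0  0  2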
Instead of mounting with discard, trim on a schedule by adding this script to /etc/cron.daily:

$ cat /etc/cron.daily/fstrim
#!/bin/sh

PATH=/bin:/sbin:/usr/bin:/usr/sbin

# run fstrim at the lowest best-effort I/O priority
# so the daily trim does not starve other I/O
ionice -n7 fstrim -v /
ionice -n7 fstrim -v /var/lib/vz
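cron.daily only runs scripts that are executable, so remember to:

Code:
chmod +x /etc/cron.daily/fstrim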
 
I'm using CentOS 6 x64 as the guest OS, and the VM has 16 GB of memory and 16 Xeon CPU cores.
I guess that if I could give the VM direct I/O access to the underlying storage device, the issue would be fixed, wouldn't it?
Is it possible in Proxmox to connect the device directly to the VM?
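(For reference, one possible approach is to pass a whole block device into the VM with qm set; this is only a sketch, and the VM id and device path below are purely hypothetical:)

Code:
# attach a raw block device to VM 210 as a virtio disk
# (use a stable /dev/disk/by-id/ path; this one is made up)
qm set 210 -virtio1 /dev/disk/by-id/scsi-SUPERMICRO_RAID10_VOL0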
 

Technically, it's possible to use the new x-data-plane option on the kvm command line:

-device virtio-blk-pci,drive=drive-virtio0,id=virtio0,.......,x-data-plane=on

(It's not supported yet in the Proxmox GUI.)

Benchmarks show around 1 million IOPS, but some features won't work anymore (live backup, move disk, disk I/O throttling, ...).

 
What kind of RAID is used? Is this hardware or software RAID?
 
It's a Supermicro RAID 10 setup; I don't think anything is wrong with the RAID controller, because performance in the host OS is fine.
 
The guest is using ext3 too, and I'm using the same dd command to perform the test.
It's hardware RAID; the controller model is "Supermicro AOC-USAS2LP-H8iR".
 
What cache mode is used for the VM drive? The default is cache=none.

Try both of these on the host and in the VM:
# buffered write through the page cache
dd if=/dev/zero of=/some/dir/testfile bs=1M count=1000
# direct write, bypassing the page cache
dd if=/dev/zero of=/some/dir/testfile bs=1M count=1000 oflag=direct

Also try in the VM (you need to create a spare virtual drive, since this writes to the raw device):
dd if=/dev/zero of=/dev/vdX bs=1M count=1000
dd if=/dev/zero of=/dev/vdX bs=1M count=1000 oflag=direct
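dd only measures sequential streaming writes; for random-I/O numbers, a fio run along these lines gives a fuller picture (assuming fio is installed in the guest; the file name, size, and runtime are illustrative):

Code:
# 4k random writes with direct I/O, bypassing the guest page cache
fio --name=randwrite --filename=/some/dir/fiotest --size=1G \
    --bs=4k --rw=randwrite --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=30 --time_based --group_reporting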
 

I was not able to use the x-data-plane switch. Would you please show a complete example of its usage with Proxmox 3.2?
 
You may need to reactivate your LVM volume with:

Code:
lvchange /dev/lvm-ssd-01/vm-210-disk-1 -a y

My (working) /etc/pve/qemu-server/210.conf looks like:
Code:
args: -drive file=/dev/lvm-ssd-01/vm-210-disk-1,if=none,id=drive-virtio1,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb,scsi=off,config-wce=off,x-data-plane=on
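To check that the args line is actually picked up, you can print the kvm command line Proxmox generates for the VM (id 210 as in the example above):

Code:
qm showcmd 210 | grep -o 'x-data-plane=on'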
 
That helped a lot!
Would you please let me know which cache mode works best for this kind of attached device?
 
