[SOLVED] KVM + LVM + DRBD + SSD low performance

Re: KVM + LVM + DRBD + SSD low performance : SOLVED!!!

Alright, at my last job we had a similar problem with storage performance inside VMs. At the hypervisor level, outside the VMs, we saw huge throughput in I/O tests, but inside the VMs it was miserable.

Mind you, this was not a Proxmox environment but an Ubuntu OpenStack environment using KVM for virtualization.

My colleague at the time worked very hard on a solution and finally solved it by enabling the vhost_net kernel module. I believe this is standard in 3.x kernels but not in the 2.6.x kernels that are used by Proxmox.

After enabling the vhost_net kernel module, we saw read and write speeds inside the VMs increase by 100-300 MB/s (I am not joking, the difference really was that huge).

Unfortunately I don't know exactly how my colleague fixed the issue, but I suspect you had the same problem, since upgrading to a 3.x kernel solved it for you.

Here are some links for extra reading. I suggest you really look into this, as I strongly suspect the lack of vhost_net was the cause of your performance issues.

https://blog.codecentric.de/en/2014/09/openstack-crime-story-solved-tcpdump-sysdig-iostat-episode-3/
http://docs.openstack.org/kilo/config-reference/content/kvm.html (Go to bottom "KVM performance tweaks")
http://www.linux-kvm.org/page/UsingVhost
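As a quick check on the hypervisor, something like the following sketch can show whether vhost_net is available and actually in use (standard Linux tools; whether your QEMU command line exposes `vhost=on` depends on your setup):

```shell
# Check whether the vhost_net module is currently loaded
lsmod | grep vhost_net || echo "vhost_net not loaded"

# Try to load it (needs root); to persist across reboots on
# Debian-based systems, add the module name to /etc/modules
modprobe vhost_net

# QEMU processes that actually use it show vhost=on on their
# virtio-net devices
ps aux | grep -F 'vhost=on' | grep -v grep || true
```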

I have no doubt it is a similar issue, because the symptoms were exactly the same. I think the best way to solve it in this case is to update the kernel, as long as no OpenVZ containers are running.
I am still searching for information about trim in our DRBD configuration; I have seen nothing about it in the DRBD documentation so far. Does anyone else have information on this?
 
Re: KVM + LVM + DRBD + SSD low performance : SOLVED!!!

Hi

The last thing I have to do is apply trim/discard in our configuration. Do you know how to do it?

Thanks


You need to use a virtio-scsi disk and enable discard on the disk in the GUI.

But I don't know whether DRBD works with discard?
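For reference, those GUI settings correspond to lines like these in the VM's config file under /etc/pve/qemu-server/ (the VM ID 100 and the storage name local-lvm below are placeholders; adjust for your setup):

```
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-100-disk-1,discard=on
```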
 
Re: KVM + LVM + DRBD + SSD low performance : SOLVED!!!

LVM does, and it is enabled by default.

Works in DRBD since 8.4.4:

Source: http://drbd.linbit.com/home/roadmap/

As Mir says, it's a new feature since version 8.4.4; that is exactly why I updated from 8.4.3 to 8.4.5...
I have a virtio disk and virtio-scsi active, and have enabled discard on the disk in the GUI. Is there any special test to verify that trim is working (fstrim -v / inside the VM is still failing)? Is there anything to activate in the DRBD configuration?
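A minimal way to test trim inside the guest (assuming util-linux is installed; the mount point / is just an example):

```shell
# Show whether the block devices advertise discard support:
# non-zero DISC-GRAN / DISC-MAX means the device accepts TRIM
lsblk --discard

# Ask the filesystem to discard all unused blocks; with -v it
# prints how many bytes were trimmed. If any layer in the stack
# (guest disk -> DRBD -> LVM -> SSD) drops discard, this fails
fstrim -v / || echo "discard is not supported on this stack"
```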
 
Re: KVM + LVM + DRBD + SSD low performance : SOLVED!!!

How did you update exactly? The drbd-module is part of pve-kernel-3.10.0-11-pve.
 
Re: KVM + LVM + DRBD + SSD low performance : SOLVED!!!

How did you update exactly? The drbd-module is part of pve-kernel-3.10.0-11-pve.

Hi

I updated the kernel first, and that solved the performance problem.
Upgrading the kernel brought DRBD to version 8.4.3, which doesn't have trim support, so I built and installed the latest version, 8.4.5, which does.
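For anyone wanting to reproduce this, the build goes roughly like the sketch below. The download URL and the pve-headers package name are assumptions based on LINBIT's and Proxmox's conventions at the time; adjust them for your kernel, and stop DRBD resources before reloading the module:

```shell
# Build prerequisites plus headers for the running pve kernel
apt-get install -y build-essential flex pve-headers-$(uname -r)

# Fetch and unpack the DRBD 8.4.5 sources (URL is an assumption)
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.5.tar.gz
tar xzf drbd-8.4.5.tar.gz
cd drbd-8.4.5

# Build the module against the running kernel and install it
make KDIR=/lib/modules/$(uname -r)/build
make install

# Reload the module (DRBD resources must be down first)
rmmod drbd 2>/dev/null
modprobe drbd
cat /proc/drbd 2>/dev/null | head -n1   # should now report version: 8.4.5
```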
 
Re: KVM + LVM + DRBD + SSD low performance : SOLVED!!!

You need to use a virtio-scsi disk and enable discard on the disk in the GUI.

But I don't know whether DRBD works with discard?

Sorry Spirit, I misunderstood what you said before... I had "virtio" for the disks and "virtio" as the SCSI controller type... After changing the disks to "scsi", I can now trim in the VM... But it also works in other VMs where I have an older version of DRBD without trim support (8.3.9)... So I don't know whether everything is fine and DRBD trims the disk automatically. As I understand it, trim needs to pass through all layers: local disk, LVM, DRBD, and the disk in the VM. Is that true?


The fio results seem a bit worse, but still acceptable, with scsi instead of virtio disks:

READ: io=3071.7MB, aggrb=53926KB/s, minb=53926KB/s, maxb=53926KB/s, mint=58327msec, maxt=58327msec
WRITE: io=1024.4MB, aggrb=17983KB/s, minb=17983KB/s, maxb=17983KB/s, mint=58327msec, maxt=58327msec

So the right approach is to choose scsi disks for SSDs and to trim in each VM, with the discard option enabled in LVM (which is the default)? As you can see, I'm a bit confused about all this :)...
 
Re: KVM + LVM + DRBD + SSD low performance : SOLVED!!!

But it also works in other VMs where I have an older version of DRBD without trim support (8.3.9)... So I don't know whether DRBD trims the disk automatically. As I understand it, trim needs to pass through all layers: local disk, LVM, DRBD, and the disk in the VM. Is that true?
If DRBD < 8.4.4 the trim command will be ignored by DRBD (a noop).
 
Re: KVM + LVM + DRBD + SSD low performance : SOLVED!!!

If DRBD < 8.4.4 the trim command will be ignored by DRBD (a noop).

OK, so I understand that if I trim my VMs correctly and have DRBD 8.4.5, there is nothing more for me to do...
Thanks very much!!!
 
