Proxmox VE with newer kernels (KVM only)

mangoo

Member
Feb 4, 2009
I know that it's not supported, but I was wondering if someone is using Proxmox VE with a newer kernel (for KVM only, no OpenVZ).

I have some strange network connectivity problems related to the tg3 module (the host becomes unreachable, tcpdump on the host shows absolutely no traffic, and bringing down the network and removing the tg3 module results in a kernel panic) - perhaps these problems are fixed if I use the latest and greatest kernel.

Will KVM in Proxmox VE work if I compile the kernel myself? Or is there anything special I should be aware of?
 

Did you try the kernel from the 1.4 beta (see the pvetest repo)?
 
Yes. FYI, next week we will test a new KVM version here, hopefully ready to release soon.

BTW - will this release give possibility to not use "fairsched" for KVM, as we discussed on the mailing list once?

Something along the lines of:

'-cpuunits=0' (0 == ignore)

I'm not sure whether the issue I'm describing here stems from this as well (in addition to the problems it causes for guests running virtio_net).
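The proposed switch could be read roughly as follows; a minimal sketch of the semantics only, where the function name and option strings are hypothetical and nothing below is actual Proxmox VE code:

```python
# Hypothetical sketch of the proposed '-cpuunits=0' semantics:
# 0 means "ignore", i.e. do not place the KVM process under fairsched.
# Nothing here is real Proxmox VE code.

def fairsched_args(cpuunits):
    """Return scheduler-related launch options for a KVM guest."""
    if cpuunits == 0:
        # 0 == ignore: run KVM as a plain host process
        return []
    # otherwise keep the current behaviour: a fairsched group
    return ["--fairsched-units", str(cpuunits)]

print(fairsched_args(0))     # []
print(fairsched_args(1000))  # ['--fairsched-units', '1000']
```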
 

I just uploaded the new kernel and the new KVM to the pvetest repository. There is also a new ISO which includes those packages (1.4 beta2). Can you please test whether the bug is still there?
 

The bug is still there (guest slow when using virtio_net).

Please do not apply the fairsched patch to KVM, as KVM is simply broken with it - or provide an option to disable cpuunits/fairsched for KVM guests.

As discussed on the mailing list and privately ("CPU units" breaks virtio, back in June), the fairsched patch has little or no effect for KVM: a KVM guest is essentially a single process (whereas an OpenVZ guest consists of many processes visible to the host), so it will not starve other guests even when using lots of CPU.

And the patch does break KVM with virtio_net - with Proxmox VE, virtio_net, instead of adding extra performance, just leaves a bad taste and gives the impression that KVM is "slow".
 
> The bug is still there (guest slow when using virtio_net).

The problem is that I still have no way to reproduce that behaviour reliably.

> Please, do not apply fairsched patch to KVM as it is simply broken with it - or provide an option to disable cpuunits/fairsched for KVM guests.

I will provide a way to disable it.

> As discussed on the mailing list and privately ("CPU units" breaks virtio - in June), the fairsched patch has little or no effect for KVM, as KVM is essentially one single process (OpenVZ guest processes are many and visible by the host) and will not starve other guests even when using lots of CPU.

Without the patch, all KVM guests run in VE0, which is assigned 1000 CPUUnits by default - so all KVM guests together are limited to the CPU share of a single OpenVZ VM. I would say that is an effect - a really bad one.
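The arithmetic behind that point can be illustrated with a small sketch; the guest names and the OpenVZ unit values are made up for illustration, only the 1000-unit default for VE0 comes from the discussion above:

```python
# Proportional CPU shares under a fairsched-style scheduler:
# when everyone is busy, each entity gets units_i / sum(units) of the CPU.
# Guest names and non-VE0 unit values are illustrative.

def cpu_shares(units):
    total = sum(units.values())
    return {name: u / total for name, u in units.items()}

# Without the patch, every KVM guest lives inside VE0 (1000 units by
# default), so all KVM guests combined get no more than the share of a
# single OpenVZ VM with the same units:
unpatched = cpu_shares({"ovz-vm1": 1000, "ovz-vm2": 1000,
                        "ve0 (all KVM guests)": 1000})
print(unpatched["ve0 (all KVM guests)"])  # ~0.333 for ALL KVM guests combined
```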
 
> The problem is that I still have no way to reproduce that behaviour reliably.

Yes, that's problematic here - there is just no easy way to reproduce it with a simple script like:

# ./cause-slowness-with-virtio_net.sh

running for 5 minutes ;)

In practice, it usually takes a couple of days until I hit it.
It first happens for guests which do lots of IO and use CPU a bit.
I observed it on:
- more or less busy webservers,
- more or less busy MTAs,
- fileservers,
- but I can reproduce it quite reliably (and faster than on the above hosts) on a backup server running BackupPC, which uses storage connected over iSCSI (iSCSI is set up in the KVM guest, not on the host).

They all have in common that they send a relatively large number of packets; the backup server hits the issue much faster - it sends many more packets and uses much more CPU compared to the rest of the servers hitting the problem.

So probably once some CPU-cycles-per-packet threshold is crossed in a given time window, a given KVM process gets penalized. I have no hard proof of that, and no idea why it only happens for guests with virtio_net. Something too technical for me.
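That threshold hypothesis could be probed with something like the following sketch, which turns periodic samples of a guest's CPU time and packet counter into CPU-cost-per-packet figures. The sample numbers below are invented; in practice they would come from the kvm process's CPU accounting and the guest's interface counters:

```python
# Sketch: CPU seconds spent per transmitted packet between consecutive
# samples, to look for the hypothesized cycles/packet correlation.
# All numbers below are invented for illustration.

def cpu_per_packet(samples):
    """samples: list of (cpu_seconds, tx_packets) tuples, oldest first."""
    rates = []
    for (c0, p0), (c1, p1) in zip(samples, samples[1:]):
        dpkts = p1 - p0
        # guard against an idle interval with no packets sent
        rates.append((c1 - c0) / dpkts if dpkts else float("inf"))
    return rates

# e.g. three samples taken a minute apart:
print(cpu_per_packet([(10.0, 100_000), (22.0, 400_000), (40.0, 500_000)]))
# [4e-05, 0.00018]
```

A sustained jump in these per-packet figures for one kvm process, just before the guest goes "slow", would support the idea that the scheduler is penalizing it.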


> I will provide a way to disable it.

Great!
 
