Users running OpenVZ on 2.6.32 - need your input

gkovacs

Renowned Member
Dec 22, 2008
Budapest, Hungary
We are running several Proxmox VE servers, currently on 2.6.24, and are planning to upgrade to 2.6.32. Since several (minor) stability and performance problems have been reported, we are in no rush to go ahead, as the current kernel is quite stable for us.

If you are running a mixed OpenVZ/KVM environment on 1.6/2.6.32 (especially if you have upgraded from 1.5/2.6.24) please share your experience by answering the following questions:

- did the upgrade (especially kernel) happen without any problems?
- are your systems generally stable since the kernel upgrade?
- how is load on OpenVZ guests and the host (has load generally gone up or down)?
- how is system-wide IO responsiveness, especially when vzdump is running?
- have you noticed anything particular after the upgrade (not limited to performance)?
- did anything break after the upgrade that you had to solve?

Don't forget to include a short specification of your hardware (this is ours):
Intel G45 chipset, Core2Quad CPU, 8GB RAM, Adaptec 2610 controller, 6x 250GB SATA RAID10
 
:) I'm lazy, so I will just say this...
After the upgrade on my older test server, most of the problems (v1.5) went away.
I did notice that vzdump is slow. (I don't care, I use BackupPC anyway. Full VM backups are just too heavy/slow to do daily otherwise. Weekend backups with vzdump.)
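For anyone wanting the same split (daily file-level backups with BackupPC, a full vzdump run only on the weekend), here is a minimal cron sketch; the dump directory /backup is a placeholder and the exact vzdump options may vary between releases:

# /etc/cron.d/vzdump-weekend (hypothetical file) -- full backup of all guests
# every Sunday at 02:30, compressed, written to /backup, log mailed to root
30 2 * * 0   root   /usr/sbin/vzdump --all --compress --dumpdir /backup --mailto root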
 
Currently running 1.4/2.6.24 with both KVM and OpenVZ in production. I have not upgraded, but I'm testing 1.6/2.6.32 on some new hardware. I cannot compare performance-wise, as the new hardware is faster. So far the only annoyances are the missing live migration of OpenVZ (compared to 2.6.18) and the missing "cpu limits".
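For context, the "cpu limits" mentioned above are the per-container CPU caps normally set through vzctl; a rough sketch of what that looks like (container ID 101 is a placeholder), which only takes effect when the kernel supports the feature:

# Cap container 101 at about half of one core and give it a relative CPU weight;
# --save also writes the values into the container's config file
vzctl set 101 --cpulimit 50 --cpuunits 1000 --save
# Limit the container to 2 CPUs
vzctl set 101 --cpus 2 --save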
 
I have upgraded to 1.6/2.6.32. So far it works well. I basically run Windows 2003 Servers in KVM, and it has worked well this week, but it's too early to say. If you ask me in 6 months I will give you a better answer. :)
The performance is about equal to 2.6.24. And Windows 2008 Servers with virtio finally run well (as under 2.6.18) with the newest drivers. With 2003 servers you have to stick with the older virtio drivers.
 
I just installed 1.6 with 2.6.32 on an Intel S5520UR (Intel Xeon E5520 2.27 GHz, 12 GB RAM), then set up an OpenVZ Debian 6 container. Because of horrible network performance I decided to switch from venet to a bridge, but as soon as I started to transfer data in bridge mode the machine froze up:

Message from syslogd@a54 at Oct 21 18:17:01 ...
kernel:Oops: 0010 [#1] SMP
kernel:last sysfs file: /sys/devices/virtual/net/lo/operstate
kernel:Stack:
kernel:Call Trace:
kernel: <IRQ>
kernel: <EOI>
kernel:Code: Bad RIP value.
kernel:CR2: 0000000000000000
kernel:Kernel panic - not syncing: Fatal exception in interrupt
I decided to go back to 2.6.18 on the same machine, still with 1.6, started the OpenVZ container in bridge mode, and everything runs nicely again :D. I hope my experience helps.

greets,
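For completeness, the venet-to-bridge switch described above is usually done along these lines; this is only a sketch, with container ID 101, bridge vmbr0 and the addresses as placeholders, not the poster's exact commands:

# Remove the venet address and add a bridged veth interface to the container
vzctl set 101 --ipdel all --save
vzctl set 101 --netif_add eth0 --save
# Attach the host side of the veth pair (named veth101.0 by default) to the bridge
brctl addif vmbr0 veth101.0
# Inside the container, configure eth0 as usual, e.g.:
#   ip addr add 192.0.2.10/24 dev eth0
#   ip route add default via 192.0.2.1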
 
I started using 2.6.32 two weeks ago. One server works OK (Xeon 3340). Today I migrated a cluster with only one OpenVZ container: live migration works, and no issues yet.
Another server was upgraded two days ago; on the second day vz stopped responding, and I had to reboot the whole server and go back to 2.6.18.
 
If you are running a mixed OpenVZ/KVM environment on 1.6/2.6.32 (especially if you have upgraded from 1.5/2.6.24) please share your experience by answering the following questions:

- did the upgrade (especially kernel) happen without any problems?
- are your systems generally stable since the kernel upgrade?
- how is load on OpenVZ guests and the host (has load generally gone up or down)?
- how is system-wide IO responsiveness, especially when vzdump is running?
- have you noticed anything particular after the upgrade (not limited to performance)?
- did anything break after the upgrade that you had to solve?

Recently I upgraded from 2.6.24 to Proxmox 1.7 and kernel 2.6.32-4-pve. Answering your questions:

1. No problems during the upgrade.
2. No, they have not been stable. I have run into a lot of trouble, described here. Now I plan to either downgrade to the 2.6.24 kernel (now unsupported) or migrate to Citrix XenServer. Not decided yet.
3. The load on the guests is quite low. The hardware was chosen with a large performance reserve for my tasks. I see periodic peaks, but they can all be easily explained.
4. I can't notice any difference. I never had problems with IO; I use hardware RAID 0+1, which is fast enough.
5. Yes, I have noticed that:
a) DRBD and OCFS2 now operate a bit more stably and sensibly.
b) The GRE protocol is now supported inside VEs. I use it (a short sketch follows at the end of this post).
6. Yes. See item #2.

Hardware:
HP DL380 G5 servers, Intel(R) Xeon(R) E5430 CPU @ 2.66 GHz, 10 GB RAM, hardware (real) RAID 0+1 built from 6x 137 GB physical SAS HDDs.
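For anyone else who wants GRE inside a container (item 5b above), it is typically enabled roughly like this; a sketch assuming container ID 101 and that this vzctl version knows the ipgre feature flag:

# On the host: load the GRE module and grant the feature to the container
modprobe ip_gre
vzctl set 101 --features ipgre:on --save
vzctl restart 101
# Inside the container a GRE tunnel can then be created, e.g.:
#   ip tunnel add gre1 mode gre remote 198.51.100.2 local 198.51.100.1 ttl 255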
 

I'm not running only OpenVZ, but I can still help out.

1. No. On an HP ProLiant DL380 G6 server I had to upgrade the firmware using the latest firmware boot CD; afterwards Proxmox started with no problems. I still have some issues if I install the HP Debian tools, so I don't use them for now. The server has been running stably for 2 months since the upgrade. On an HP DL320 G3 server I had to use the 2.6.18 kernel to get everything working. The same firmware issue applies to all HP servers, so I advise upgrading the firmware when upgrading Proxmox on HP servers.

2. Yes.

3. Low

4. Good.

5. No, only better performance and stability related to the KVM guests. The OpenVZ containers run as before.

6. Yes, generally hardware related.
 
