New 2.6.32 Kernel with stable OpenVZ (pvetest)

Just uploaded a new kernel which should fix that (should respect BIOS settings now). Also updated igb and ixgbe drivers.

Please can you test?

Great - now 2.6.32-6 works without the boot parameter "pcie_aspm=off", thanks!
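
For anyone else who used the workaround: it was simply appended to the kernel line, e.g. in /boot/grub/menu.lst (kernel version, paths and root device below are just an example), and can now be dropped:
Code:
# old workaround, no longer needed with 2.6.32-6
kernel /boot/vmlinuz-2.6.32-4-pve root=/dev/mapper/pve-root ro pcie_aspm=off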
 
Hi,

I had already tried 2.6.35, 2.6.32-4 and older kernels and had stability problems during vzdump (LVM snapshot) and qmrestore: virtual machines were literally freezing during dump/restore.
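
For reference, this is roughly the kind of command the machines were freezing on (CT/VM IDs, archive names and paths are just examples):
Code:
# snapshot-mode backup of container 101 to /backup
vzdump --snapshot --compress --dumpdir /backup 101
# restore a KVM guest dump into VM 102
qmrestore /backup/vzdump-qemu-101.tgz 102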

After update to pvetest the problem vanished and I feel that servers are more stable under heavy load.

Thanks. Good job.
 
I have very superficially tested the 2.6.32-42 kernel from the "pvetest" repo. The only major thing I have to note: the new smbfs v4.5 from the Debian Squeeze repo now works on this kernel. Previously, cifs-utils and the other Samba libraries from the 4.5 branch were incompatible with the 2.6.32 OpenVZ-specific kernels; see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=612911 for details. That issue is now fixed.
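
For reference, a mount like the following used to fail on the OpenVZ 2.6.32 kernels with cifs-utils 4.5 and works now (server, share, mount point and user are only placeholders):
Code:
mount -t cifs //fileserver/backup /mnt/backup -o username=backupuser,iocharset=utf8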
 
And same version of vzctl?

I used both kernels with vzctl 3.0.28-1pve5.

Now I have installed the following and it works without problems:

Code:
proxmox02:~# pveversion -v
pve-manager: 1.8-22 (pve-manager/1.8/6531)
running kernel: 2.6.18-6-pve
proxmox-ve-2.6.18: 1.8-15
pve-kernel-2.6.18-6-pve: 2.6.18-15
qemu-server: 1.1-31
pve-firmware: 1.0-13
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.28-1pve5
vzdump: 1.2-15
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-15
 
Please try to find a way to reproduce the bug, so that I can debug it here.

Today I debugged my network settings on server1 and server2 a bit and found that something was wrong in the /etc/network/interfaces settings on server1 (bond0.2 was not loaded). I have corrected this problem (a generic example of such a stanza is sketched at the end of this post) and am now running the following configuration:

Code:
proxmox01:~# pveversion -v
pve-manager: 1.8-23 (pve-manager/1.8/6533)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.8-42
pve-kernel-2.6.32-6-pve: 2.6.32-42
qemu-server: 1.1-31
pve-firmware: 1.0-13
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.28-1pve5
vzdump: 1.2-15
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.0-6

Everything works fine on both servers now with the OpenVZ containers.
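
For anyone hitting the same thing, here is a generic sketch of a VLAN-on-bond stanza in /etc/network/interfaces (interface names, VLAN ID and addresses are only examples, not my real config):
Code:
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode active-backup
        bond_miimon 100

auto bond0.2
iface bond0.2 inet static
        address 192.168.2.10
        netmask 255.255.255.0
        vlan-raw-device bond0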
 
Weird, because when I did the upgrade, all my Debian and Ubuntu VPS ended up with an almost empty /dev/. The CentOS VPS worked fine, though. I am absolutely sure that:
- all containers worked before on the stock Proxmox 1.8 kernel
- some of these Debian/Ubuntu systems had not been updated for a long, long time

The most obvious consequences were that most services were unable to start and I could not enter the machines:
Code:
# vzctl enter 205
enter into CT 205 failed
Unable to open pty: No such file or directory
Now I found a script somewhere and made a few tweaks to it, so that it would work:
Code:
#!/bin/bash
# Recreate the basic device nodes that went missing from /dev in the container.

cd /dev/ || exit 1

# pty/ptmx first, otherwise "vzctl enter" keeps failing with "Unable to open pty"
MAKEDEV ptmx
MAKEDEV pty
MAKEDEV core
ln -sf /proc/self/fd /dev/fd
MAKEDEV full
MAKEDEV kmem
MAKEDEV mem
MAKEDEV port
MAKEDEV ram
MAKEDEV random
MAKEDEV shm
MAKEDEV urandom
MAKEDEV zero

# without a mounted devpts, entering the container still fails
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
I made a few corrections there (most importantly, the MAKEDEV syntax was wrong before, so the version above works unlike the original one). I also added the last two lines; otherwise /dev/pts does not work and I would still be unable to enter.

I call the script above from /etc/openvz.conf, just before the last two lines (before "init 2").
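
For completeness, the end of that file then looks roughly like this (the script path is just where I happened to put it, adjust as needed):
Code:
# run the /dev fixup script before switching runlevels
/root/fix-dev.sh
init 2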

So for now it seems that I have fixed it, BUT I would really like this not to happen at all, so that I do not need to manually edit all my containers after the upgrade. I tried disabling udev and many other things from the net, but the problem persisted. On the other hand, all my CentOS systems start correctly even though they do have a udev daemon...

Update:
I forgot to include the pveversion output, in case anybody wonders:
Code:
pve-manager: 1.8-23 (pve-manager/1.8/6533)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.8-42
pve-kernel-2.6.32-6-pve: 2.6.32-42
qemu-server: 1.1-31
pve-firmware: 1.0-13
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.28-1pve5
vzdump: 1.2-15
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.0-6

The system which works is as follows:
Code:
pve-manager: 1.8-18 (pve-manager/1.8/6070)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-33
pve-kernel-2.6.32-4-pve: 2.6.32-33
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.28-1pve1
vzdump: 1.2-14
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6
 
Can you define a reproducible test case with a Debian template? So far all upgrade tests in our test labs worked without issues.
 
Interesting. It seems that I can reproduce it once the upstart update procedure puts the /etc/init/mountall.conf file in place. Try it on your Debian template - it probably lacks some of the /etc/init scripts related to mounting filesystems. I am pretty sure the mountall script did not do anything bad previously... My last post is a cure for when the mountall.conf file is in place - once I remove the file, everything works just fine without the previously mentioned script.
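
If someone wants to check their containers for it, a quick sketch (the CT ID is only an example):
Code:
# is the upstart mountall job present in the container?
vzctl exec 205 ls -l /etc/init/mountall.conf
# removing it was enough here to get /dev populated normally again
vzctl exec 205 rm /etc/init/mountall.conf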
 
Did you check whether this issue is really related to the latest kernel upgrade, i.e. did you also test the latest stable? Please note that there is also a new vzctl, so keep this in mind when you compare different systems.
 
Hi,

I would guess that this problem is related to vzctl and how it mounts filesystems. I can't reboot the system right now to downgrade the kernel, but at least I no longer have a huge problem with it. No templates contain mountall.conf scripts anyway - they are added by the upgrade procedure, which asks whether it is allowed to add the file. I will try later.

Another question: will it be possible to use vSwap with this version? The Proxmox UI did not change with this upgrade, and the value marked as "swap" is still added to the RAM of the container.
 
We will try to add vSwap in 2.0.
 
So you will not add this small and simple feature to 1.8? :( The kernel supports it, so it is just a matter of changing which variable is set by the "swap" value in the UI...
 
Most OpenVZ users work with our stable 2.6.18 kernel, some with the current 2.6.32, and vSwap does not work on either. So think twice before you write such "simple" suggestions - they would break almost all productive OpenVZ installations...

So 2.0 is the target for new features that are only available on the RHEL6 branch.
 
Most OpenVZ users work with our stable 2.6.18 kernel, some with the current 2.6.32, and vSwap does not work on either. So think twice before you write such "simple" suggestions - they would break almost all productive OpenVZ installations...

So 2.0 is the target for new features that are only available on the RHEL6 branch.
What about an "if ... else ..."? Really, that's quite "simple"...
 
Probably simple, but the 1.x releases are already feature-frozen. However, if you provide a tested patch, our devs will take a look at it.
 
Could you point me to the .pm where the memory limits are calculated? I can't find it.
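
For reference, this is roughly how I have been searching so far (the paths are guesses on my part), without finding the right spot:
Code:
# look for the UBC memory parameter handling on the PVE 1.8 host
grep -rln "privvmpages\|vmguarpages" /usr/share/perl5 /usr/sbin /usr/bin 2>/dev/null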

--edit
According to the OpenVZ wiki (http://wiki.openvz.org/VSwap) this should be really easy. It seems that swappages are ignored when not supported and treated as primary when they are set. So the only thing needed would be an additional swappages = swap_size_in_pages, and we should be fine.
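
On the command line that would look something like this (CT ID and sizes are only examples; according to the wiki, --swappages is simply ignored on kernels without VSwap support):
Code:
# 1 GB of RAM plus 512 MB of vswap for container 101 (values in 4 KB pages)
vzctl set 101 --physpages 0:262144 --swappages 0:131072 --save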
 