Cannot start containers after upgrade to 1.5

conrad

Member
Nov 20, 2008
edit:

Booted back into the 2.6.24 kernel after seeing that OpenVZ is not supported in 2.6.32.

But now the KVM machines do not start: they stop at the gPXE message, even though we have no PXE server, and pressing F12 for the boot menu does nothing.
......


We updated a production machine and it looked fine, but...

Code:
pm01:/var/log# vzctl start 105
Unable to open /dev/vzctl: No such file or directory
Please check that vzdev kernel module is loaded and you have sufficient permissions to access the file.

The device doesn't exist in /dev anymore.

How can I fix this?
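For anyone hitting the same message: /dev/vzctl only exists while an OpenVZ-capable kernel is booted, so a first sanity check (a hedged sketch, not an official tool) is simply to look at the running kernel and the device node:

```shell
#!/bin/sh
# Sketch: /dev/vzctl is created by the OpenVZ kernel, so its absence
# usually means the booted kernel has no OpenVZ support at all.
kernel="$(uname -r)"
echo "running kernel: $kernel"

if [ -e /dev/vzctl ]; then
    echo "/dev/vzctl present - OpenVZ kernel active, vzctl should work"
else
    echo "/dev/vzctl missing - boot an OpenVZ-patched kernel first"
fi
```

If the device is missing, no amount of permission fiddling helps; the kernel itself has to change.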
 
Code:
pve-manager: 1.5-9 (pve-manager/1.5/4728)
running kernel: 2.6.24-8-pve
proxmox-ve-2.6.32: 1.5-7
pve-kernel-2.6.32-2-pve: 2.6.32-7
pve-kernel-2.6.24-8-pve: 2.6.24-16
qemu-server: 1.1-14
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.4-1
ksm-control-daemon: 1.0-3
 
Please try to upgrade the 2.6.24 kernel to a newer version (pve-kernel-2.6.24-11-pve_2.6.24-23_amd64.deb is the latest available).
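The fix comes down to noticing that the recommended kernel carries a newer ABI number (-11) than the one running (-8). A small sketch of that comparison, with the version strings hard-coded from this thread (on a real host they would come from `uname -r` and the installed package list):

```shell
#!/bin/sh
# Extract the ABI number ("8" vs "11") from a 2.6.24 PVE kernel string.
abi() {
    echo "$1" | sed 's/^2\.6\.24-\([0-9][0-9]*\)-pve$/\1/'
}

running="2.6.24-8-pve"      # from the pveversion output above
available="2.6.24-11-pve"   # the kernel recommended in this reply

if [ "$(abi "$available")" -gt "$(abi "$running")" ]; then
    echo "newer kernel available: $available - install it and reboot"
fi
```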
 
Dietmar,
I installed the latest kernel and the KVM machines are starting now.
The only thing is that the VNC consoles suddenly cannot connect to host:5900.
Strange, but I will look at that later...

Thanks for your help.
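As a quick follow-up check for the console issue (a hedged sketch; 5900 is just the port from the post above, and the actual VNC port depends on the VM), one can verify whether anything on the host is listening there at all:

```shell
#!/bin/sh
# Hypothetical check: is any listener bound to the port the VNC console
# tries to reach? Uses ss if present, falls back to netstat.
port=5900
if command -v ss >/dev/null 2>&1; then
    ss -ltn 2>/dev/null | grep ":$port " || echo "nothing listening on :$port"
elif command -v netstat >/dev/null 2>&1; then
    netstat -ltn 2>/dev/null | grep ":$port " || echo "nothing listening on :$port"
else
    echo "neither ss nor netstat available on this host"
fi
```

If nothing is listening, the problem is on the host side (the VNC server was never started); if something is, it is more likely a firewall or client issue.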
 
I'm having a similar problem, except that I am using a completely fresh build. I built the box (Dell PE 1850), and immediately upgraded to 2.6.32-2-pve. (I am trying to migrate off of a pure Debian box that I need to repurpose, and it has a 2.6.32 kernel as well.

In any case, I did the install and upgrade, did a dump of the containers on the original machine (vzctl chkpnt $x --dumpfile dump.$x), then tried to import them onto the Proxmox machine and got:

Code:
# vzctl restore 6 --dumpfile dump.2
Unable to open /dev/vzctl: No such file or directory
Please check that vzdev kernel module is loaded and you have sufficient permissions to access the file.
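The checkpoint/restore round trip described above can be sketched as a loop (the container IDs here are made up, and the guard just makes the sketch safe to run on a box without OpenVZ):

```shell
#!/bin/sh
# Sketch of the migration in the post: checkpoint each container on the
# old host into its own dump file, then restore on the new host.
if command -v vzctl >/dev/null 2>&1; then
    for ctid in 2 3 4; do                            # hypothetical CT IDs
        vzctl chkpnt "$ctid" --dumpfile "dump.$ctid" # on the old host
    done
    # ...copy the dump.* files over, then on the new host:
    # vzctl restore 6 --dumpfile dump.2
else
    echo "vzctl not installed here (sketch only)"
fi
```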
I searched for the vzdev module, but the only one was in the /lib/modules/2.6.18 directory.

My pveversion -e:

Code:
pve-manager: 1.5-9 (pve-manager/1.5/4728)
running kernel: 2.6.32-2-pve
proxmox-ve-2.6.32: 1.5-7
pve-kernel-2.6.32-2-pve: 2.6.32-7
pve-kernel-2.6.18-2-pve: 2.6.18-5
qemu-server: 1.1-14
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.4-1
ksm-control-daemon: 1.0-3

Anyone got a fix? I've had so many frustrations with OpenVZ on Debian lately that I am almost ready to abandon it and find some other option.

Thanks,
--vr
 
OK, this is making me strongly reconsider how we do our production servers. Both our primary and backup PVE machines mysteriously stopped running VMs and could not restart them until, after hours of reading and experimenting, I did an "aptitude install pve-kernel-2.6.24-11-pve" to change my kernel from version 2.6.24-8-pve, and suddenly everything worked.

Now, maybe I missed something important: I always assumed that if I faithfully ran aptitude update / upgrade regularly, I'd end up with everything up to date. But here there's a "-11" version while my up-to-date server says that "-8" is the newest thing. What's up with that? I also note the comments above about the 2.6.32 kernel having issues. So much for my naive assumption that if I just keep updating things, it'll be fine.
 
I guess there is a hint in the apt-get log?

No, not really: there is nothing I can see in the apt-get and aptitude logs from before the update to -11, nor in the log from when I explicitly told aptitude to install the -11 kernel.
I don't mean to complain too loudly, since I chose to try running our production servers on a free product. I guess I misunderstood how to use aptitude to make sure our PVE hosts have all the updates they "should" have: I thought "aptitude update" + "aptitude upgrade" were enough.
 
I expect that would work, although usually I can tell from the messages I see when doing a "regular" update/upgrade that there are packages held back which will only be installed with a full upgrade. I didn't see any of those when I did routine upgrades while running kernel -8. Maybe it has to do with "stable" vs. "beta" upgrades. In any case, I will be more conservative about updating things, since keeping the server running is paramount.
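One likely explanation for the "invisible" kernel (an assumption about the packaging, not something stated in this thread): each PVE kernel ABI ships under a new package name, and a plain `aptitude upgrade` only upgrades packages that are already installed, so a brand-new name never shows up until a full-upgrade/dist-upgrade or an explicit install. A toy illustration of that rule, not apt itself:

```shell
#!/bin/sh
# Toy model: "upgrade" only touches names already installed; a kernel
# with a new ABI in its package name is invisible to it.
installed="pve-kernel-2.6.24-8-pve"
available="pve-kernel-2.6.24-8-pve pve-kernel-2.6.24-11-pve"

for pkg in $available; do
    case " $installed " in
        *" $pkg "*) echo "plain upgrade would consider: $pkg" ;;
        *)          echo "invisible to plain upgrade (new name): $pkg" ;;
    esac
done
```

Which would explain why the explicit `aptitude install pve-kernel-2.6.24-11-pve` worked while routine update/upgrade runs reported nothing.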

One of our other I.T. people pointed out to me something obvious: since I have backup hardware running PVE, I could test upgrades on it before doing them on the main hardware.
 
