Proxmox VE 7 Upgrade Issues

0nezer0

After upgrading my Proxmox from 6.4 to 7.0 I have an issue with my containers. I also tried creating a new container from the Ubuntu 20.04 template, to rule out the issue being an old OS and its cgroup handling, but it fails the same way.

Here is the return on a start of any container:
Code:
lxc-start -n 119 -F -l DEBUG -o ~/lxc-119.log
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems.
Exiting PID 1...

Trying to get a console inside the container:

Code:
lxc-console: 100: tools/lxc_console.c: main: 131 100 is not running

Attached are logs for both. CT 100 is the freshly downloaded 20.04 template; 119 is an existing CT.
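(Side note: one way to see which cgroup layout the host actually presents -- PVE 7 defaults to a pure cgroup v2 hierarchy -- is the minimal check below; the exact output will of course differ per setup.)

Code:
# prints "cgroup2fs" on a pure cgroup v2 host, "tmpfs" on the old hybrid layout
stat -fc %T /sys/fs/cgroup/
# shows whether something like systemd.unified_cgroup_hierarchy=0 was passed at boot
cat /proc/cmdline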
 

I ran update-grub to make sure it wasn't a kernel issue; this seems to be the latest kernel installed. I still have no idea what the error is telling me. I tried forcing cgroup:rw:force in the LXC common config to no avail, just tons of other errors. Hopefully someone can point me in the right direction.

Code:
proxmox-ve: 7.0-2 (running kernel: 5.4.128-1-pve)
pve-manager: 7.0-10 (running version: 7.0-10/d2f465d3)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.7-1
proxmox-backup-file-restore: 2.0.7-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-8
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-12
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
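(For reference, a rough way to list the kernel entries grub generated -- assuming a standard grub setup rather than systemd-boot; paths may differ:)

Code:
# list the menu entries in the generated grub config; the default is normally the first one
grep -E "menuentry '" /boot/grub/grub.cfg | cut -d"'" -f2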
 
You are still booting the PVE 6 kernel:

proxmox-ve: 7.0-2 (running kernel: 5.4.128-1-pve)
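(A quick sanity check, as a minimal sketch: compare the running kernel against the installed 5.11 packages.)

Code:
# kernel that is actually running right now
uname -r
# 5.11 kernel images installed on the host
dpkg -l 'pve-kernel-5.11*' | awk '/^ii/ {print $2, $3}'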
 
You are still booting the PVE 6 kernel:

proxmox-ve: 7.0-2 (running kernel: 5.4.128-1-pve)
That was the latest kernel available in the boot menu. Let me see what's going on.

I have the Bullseye repos (no-subscription) and that is the latest kernel in the list:
Code:
root@pve:~# apt-cache search pve-kernel*
pve-firmware - Binary firmware code for the pve-kernel
pve-kernel-5.10.6-1-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11.0-1-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11.12-1-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11.17-1-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11.21-1-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11.22-1-pve-dbgsym - The Proxmox PVE Kernel debug image
pve-kernel-5.11.22-1-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11.22-2-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11.22-3-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11.7-1-pve - The Proxmox PVE Kernel Image
pve-kernel-5.11 - Latest Proxmox VE Kernel Image
pve-kernel-helper - Function for various kernel maintenance tasks.
pve-kernel-libc-dev - Linux support headers for userspace development
pve-kernel-5.3 - Latest Proxmox VE Kernel Image
pve-kernel-5.3.10-1-pve - The Proxmox PVE Kernel Image
pve-kernel-5.3.18-3-pve - The Proxmox PVE Kernel Image
pve-kernel-5.4 - Latest Proxmox VE Kernel Image
pve-kernel-5.4.128-1-pve - The Proxmox PVE Kernel Image
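(If the 5.11 series were missing entirely, the usual approach on the no-subscription repo would be to install the meta-package rather than a specific build -- only a sketch:)

Code:
apt update
apt install pve-kernel-5.11
# then reboot and let grub pick the new default entry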
 
Good morning,

on my 7.0 system pve-kernel-5.11 is installed. It only has a dependency on the latest kernel, and currently it pulls in pve-kernel-5.11.22-3-pve. That meta-package will always pull in the latest 5.11 kernel, which is great :)

You should not install specific kernels like pve-kernel-5.4.128-1-pve manually unless you have a specific reason to.

Best regards

Added to reduce my own confusion: Package versions and kernel versions do not always match. On my system I can see:
Code:
root@pvea:~# apt search pve-kernel-5 | grep installed
pve-kernel-5.11/stable,now 7.0-6 all [installed]
pve-kernel-5.11.22-1-pve/stable,now 5.11.22-2 amd64 [installed]
pve-kernel-5.11.22-2-pve/stable,now 5.11.22-4 amd64 [installed,automatic]
pve-kernel-5.11.22-3-pve/stable,now 5.11.22-6 amd64 [installed,automatic]
Check "22-2"! Is it only me who finds this confusing?
 
I had rebooted into that kernel on purpose to see if anything changed. I am back to 5.11 now with no change.

Code:
root@pve:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-10 (running version: 7.0-10/d2f465d3)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.7-1
proxmox-backup-file-restore: 2.0.7-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-8
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
root@pve:~#



Code:
root@pve:~# lxc-start -n 119 -F -l DEBUG -o ~/debug119.log
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems.
Exiting PID 1...


I got a Devuan CT to semi-boot... still with lots of permission errors; I don't know what in the world happened after the upgrade.


Devuan boot:
Code:
[info] Using makefile-style concurrent boot in runlevel S.
mount: /sys/fs/pstore: cannot mount pstore read-only.
mount: /sys/kernel/config: cannot mount configfs read-only.
bootlogd: cannot deduce real console device
[info] Not setting System Clock.
[ ok ] Activating swap...done.
mount: /: cannot remount /dev/loop0 read-write, is write-protected.
mount: /run: cannot remount tmpfs read-write, is write-protected.
mount: /run/lock: cannot remount tmpfs read-write, is write-protected.
mount: /proc: cannot remount proc read-write, is write-protected.
mount: /sys: cannot mount sysfs read-only.
mount: /dev/shm: cannot remount tmpfs read-write, is write-protected.
mount: /dev/pts: cannot remount devpts read-write, is write-protected.
[warn] Fast boot enabled, so skipping file system check. ... (warning).
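(In case the root cause turns out to be guests whose init/systemd is too old for the pure cgroup v2 layout that PVE 7 defaults to, the upgrade documentation describes switching the host back to the hybrid layout via a kernel parameter -- just a sketch, and only worth trying if that really is the cause:)

Code:
# in /etc/default/grub, add the parameter to the default command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
nano /etc/default/grub
update-grub
# reboot afterwards for the change to take effect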
 
@0nezer0 could you open a new thread with:
- pveversion -v
- container config
- full system boot logs (journalctl -b)
- full task log of a regular pct start
- full output and content of the log file of a debug start of the container
(rough example commands below)

thanks!
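(For reference, gathering that information would look roughly like this -- 119 standing in for the affected container ID, file names are just examples:)

Code:
pveversion -v
pct config 119
journalctl -b > /tmp/boot.log
lxc-start -n 119 -F -l DEBUG -o /tmp/lxc-119.log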