LXC containers fail to boot after upgrade

0nezer0

LXC will not start after upgrade to 7.0

Code:
root@pve:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-10 (running version: 7.0-10/d2f465d3)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.7-1
proxmox-backup-file-restore: 2.0.7-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-8
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1

Code:
arch: amd64
cores: 1
hostname: PiHoleLXC
memory: 512
nameserver: 1.0.0.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=4A:AF:F3:76:D8:CB>
onboot: 1
ostype: ubuntu
rootfs: NAS1:119/vm-119-disk-0.raw,size=24G
searchdomain: 1.1.1.1
swap: 512
unprivileged: 1
 
Here is the other information. pct start 119 does not return any output.
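For reference, roughly what that looks like (the pve-container@119 unit name is my assumption about how PVE wraps the container start; adjust if it differs on your side):

Code:
pct start 119
pct status 119
# the start is wrapped in a systemd unit, so any container-specific messages should land here
journalctl -b -u pve-container@119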
 

Attachments

  • 119Start.log (20.1 KB)
  • journalshort.txt (292.9 KB)
Which Ubuntu version are you running in the container? There's not much to see in those files: LXC starts the container's init, which just fails without much detail in between, and the journal doesn't seem to contain any messages relating to container 119. Maybe you'll get more information when trying to start it in the foreground via lxc-start -F -lDEBUG -o lxc-start.log -n 119
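For example, something along these lines (the lxc-start flags are standard LXC options: -F keeps it in the foreground, -l sets the log level, -o the log file, -n the container name):

Code:
lxc-start -F -lDEBUG -o lxc-start.log -n 119
# the last lines of the debug log usually show the step that failed
tail -n 50 lxc-start.log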
 
That was the exact command I used for the log I uploaded. Also, that container is running 18.04, I believe, but I have tried 20.04, 20.10, and other versions available as templates.
 
Finally found the issue on the LXC GitHub. The following commands fixed my issue. Somehow this got broken when upgrading from 6.4 to 7.0.

For the systemd container the systemd cgroup is missing, which is why it won't start:

Code:
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o rw,nosuid,nodev,noexec,relatime,none,name=systemd cgroup /sys/fs/cgroup/systemd

Then start the systemd container again.
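If you want that to survive a reboot, here is a minimal sketch of a one-shot systemd unit that just repeats the two commands at boot. The unit name cgroup-systemd-compat.service is made up, and Before=pve-guests.service assumes the standard PVE unit that starts onboot guests:

Code:
# /etc/systemd/system/cgroup-systemd-compat.service (hypothetical name)
[Unit]
Description=Mount legacy name=systemd cgroup hierarchy for old containers
Before=pve-guests.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mkdir -p /sys/fs/cgroup/systemd
ExecStart=/bin/mount -t cgroup -o rw,nosuid,nodev,noexec,relatime,none,name=systemd cgroup /sys/fs/cgroup/systemd

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable --now cgroup-systemd-compat.service.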

That got the container to boot, but the status in the GUI still shows it as down, and there's a 500 error for the memory.current file.

 
Proxmox 7.0 switched from cgroup to cgroup2, and cgroup is no longer supported, so only LXCs that are compatible with cgroup2 will run.
Maybe that has something to do with your problem.
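A quick way to check which layout the host is actually on (stat prints cgroup2fs for the pure cgroup2 layout and tmpfs for the old hybrid one):

Code:
stat -fc %T /sys/fs/cgroup/
mount | grep cgroup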
 
As I wrote in the previous post, I'm having the issue even on non-systemd containers, and even with Ubuntu 20.04 templates.

After running those commands, I can get them to boot. The summary page still shows this:
Code:
can't open '/sys/fs/cgroup/unified/lxc/100/memory.current' - No such file or directory (500)
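If it helps to narrow that 500 down, this shows whether the hybrid-style path from the error exists at all and where the container's cgroup actually ended up (container 100 as in the message; on a pure cgroup2 host I'd expect it directly under /sys/fs/cgroup rather than under .../unified/, but that's an assumption):

Code:
# does the path the GUI complains about exist?
ls -d /sys/fs/cgroup/unified/lxc/100 2>/dev/null
# where is the container's cgroup really?
find /sys/fs/cgroup -maxdepth 3 -type d -name 100 2>/dev/null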
 
