Can't Start Containers after Proxmox Restart

randomuser1990

New Member
Feb 20, 2023
Hello everyone,

I recently restarted my Proxmox server and have been facing issues starting containers since then. When I try to start any container, I see the following error message on the summary page:

"can't open '/sys/fs/cgroup/lxc/103/memory.current' - No such file or directory (500)"

I am not sure what caused this, but I suspect it may be related to a change in the cgroups configuration. I have tried restarting the Proxmox server multiple times, but the issue persists.
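
In case it is useful, here is a quick way to check which cgroup layout the host is actually running (my assumption being that a cgroupv2-only hierarchy is what breaks the container):

root@<host>:~# stat -fc %T /sys/fs/cgroup/

As far as I understand, cgroup2fs here means a pure cgroupv2 (unified) hierarchy, while tmpfs would indicate the legacy/hybrid layout.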

Running this on the host gives the following output:
root@<host>:~# lxc-start -n 103 -F -l DEBUG -o /tmp/lxc-103.log
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems.
Exiting PID 1...

and the full contents of /tmp/lxc-103.log can be found here: https://pastebin.com/raw/GpERcv1M
(I could not post them here because of the per-post character limit)


pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-4
pve-kernel-5.15: 7.3-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-1
lxcfs: 5.0.3-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-2
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1


Has anyone else faced this issue before? If so, could you please share your experience and any solutions that you found?
 
Yes, if your container does have a systemd version that's too old to run in a cgroupv2-only environment, you need to use one of the workarounds mentioned in the documentation.
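
For example (assuming container ID 103, a same-architecture Linux guest, and that the container is currently stopped), you can check the guest's systemd version from the host with something along these lines:

root@<host>:~# pct mount 103
root@<host>:~# chroot /var/lib/lxc/103/rootfs systemctl --version
root@<host>:~# pct unmount 103

If I remember correctly, the guest needs roughly systemd 231/232 or newer to boot in a cgroupv2-only environment.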
 
Yes, if your container does have a systemd version that's too old to run in a cgroupv2-only environment, you need to use one of the workarounds mentioned in the documentation.
I still don't understand why it worked for the first 2 weeks, during multiple restarts and everything.

Adding systemd.unified_cgroup_hierarchy=0 to the GRUB kernel command line works.
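
For anyone finding this later, what I did on a standard GRUB install (as far as I understand, systemd-boot/ZFS setups would use /etc/kernel/cmdline and proxmox-boot-tool refresh instead) was to append the option to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:

GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

and then:

root@<host>:~# update-grub
root@<host>:~# reboot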

But I would love to understand what changed to make it stop working. Really appreciate the time you're taking to reply here, btw :)
 