LXC container "memory.current - No such file or directory"

bernhardp

New Member
Jul 10, 2023
Hello, after rebooting my Proxmox 7 host, my LXC containers stopped showing status information:

Code:
root@newton:~# pct list
can't open '/sys/fs/cgroup/lxc/103/memory.current' - No such file or directory

root@newton:~# ls -l /sys/fs/cgroup/lxc/103/
total 0
-r--r--r-- 1 root root 0 Jul 11 09:04 cgroup.controllers
-r--r--r-- 1 root root 0 Jul 11 09:04 cgroup.events
-rw-r--r-- 1 root root 0 Jul 11 09:04 cgroup.freeze
--w------- 1 root root 0 Jul 11 09:04 cgroup.kill
-rw-r--r-- 1 root root 0 Jul 11 09:04 cgroup.max.depth
-rw-r--r-- 1 root root 0 Jul 11 09:04 cgroup.max.descendants
-rw-r--r-- 1 root root 0 Jul 11 09:04 cgroup.procs
-r--r--r-- 1 root root 0 Jul 11 09:04 cgroup.stat
-rw-r--r-- 1 root root 0 Jul 11 09:04 cgroup.subtree_control
-rw-r--r-- 1 root root 0 Jul 11 09:04 cgroup.threads
-rw-r--r-- 1 root root 0 Jul 11 09:04 cgroup.type
-rw-r--r-- 1 root root 0 Jul 11 09:04 cpu.pressure
-r--r--r-- 1 root root 0 Jul 11 09:04 cpu.stat
-rw-r--r-- 1 root root 0 Jul 11 09:04 io.pressure
-rw-r--r-- 1 root root 0 Jul 11 09:04 memory.pressure
drwxr-xr-x 2 root root 0 Jul 10 20:50 ns
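
Since memory.current only exists when the memory controller is enabled for a subtree, comparing the enabled controllers is one way to narrow this down (a diagnostic sketch; 103 stands in for any affected container):

Code:
# controllers available to this cgroup (empty output = none delegated):
root@newton:~# cat /sys/fs/cgroup/lxc/103/cgroup.controllers
# controllers the root cgroup delegates to its children:
root@newton:~# cat /sys/fs/cgroup/cgroup.subtree_control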

All containers are affected. My environment is:

Code:
root@newton:~# pveversion -v                                                                                                                                                                 
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-15 (running version: 7.4-15/a5d2a31e)
pve-kernel-5.15: 7.4-4
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx4
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

I am relatively new to Proxmox, so I have no idea what's going wrong.

Best regards, Bernhard
 
Code:
root@newton:~# cat /proc/mounts |grep cgroup
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,relatime,name=systemd 0 0

Code:
root@newton:~# cat /etc/pve/nodes/newton/lxc/101.conf
arch: amd64
cores: 1
features: fuse=1,nesting=1
hostname: behemoth
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.197.1,hwaddr=22:8A:D4:57:1D:87,ip=192.168.197.4/26,type=veth
onboot: 1
ostype: debian
rootfs: local:101/vm-101-disk-0.raw,size=8G
swap: 2048

Code:
root@newton:~# cat /var/lib/lxc/101/config
lxc.cgroup.relative = 0
lxc.cgroup.dir.monitor = lxc.monitor/101
lxc.cgroup.dir.container = lxc/101
lxc.cgroup.dir.container.inner = ns
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.apparmor.raw = mount fstype=fuse,
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0
lxc.monitor.unshare = 1
lxc.tty.max = 2
lxc.environment = TERM=linux
lxc.uts.name = behemoth
lxc.cgroup2.memory.max = 2147483648
lxc.cgroup2.memory.high = 2130706432
lxc.cgroup2.memory.swap.max = 2147483648
lxc.rootfs.path = /var/lib/lxc/101/rootfs
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth101i0
lxc.net.0.hwaddr = 22:8A:D4:57:1D:87
lxc.net.0.name = eth0
lxc.net.0.script.up = /usr/share/lxc/lxcnetaddbr
lxc.cgroup2.cpuset.cpus =
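
As far as I can tell, the cgroup2 memory values in this generated config are just the memory: 2048 (MiB) setting from 101.conf converted to bytes (an arithmetic sketch; the 16 MiB gap below the hard limit is my reading of the numbers):

Code:
# 2048 MiB in bytes:
2048 * 1024 * 1024             = 2147483648   # -> lxc.cgroup2.memory.max
# memory.high sits 16 MiB below the hard limit:
2147483648 - 16 * 1024 * 1024  = 2130706432   # -> lxc.cgroup2.memory.high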

Thx & best regards
Bernhard
 
Code:
root@newton:~# cat /proc/mounts |grep cgroup
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,relatime,name=systemd 0 0
Hmm, the second mount is not present on my test system. It is present on another test system booted with systemd.unified_cgroup_hierarchy=0, but that one has all the other legacy cgroup controllers too. Please share the kernel command line: cat /proc/cmdline.
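
For reference, one way to tell a pure unified cgroup layout from a hybrid one is to check the filesystem type of /sys/fs/cgroup alongside the kernel command line (a sketch using standard tools):

Code:
# "cgroup2fs" indicates a pure unified hierarchy, "tmpfs" a hybrid/legacy one:
root@newton:~# stat -fc %T /sys/fs/cgroup
# look for systemd.unified_cgroup_hierarchy=0 here:
root@newton:~# cat /proc/cmdline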
 
Code:
root@newton:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-5.15.108-1-pve root=UUID=4d457d67-67ad-498d-be5f-66bfd91aee12 ro quiet intel_iommu=on
 
After digging around I found this thread: https://forum.proxmox.com/threads/cant-start-containers-after-proxmox-restart.123866/

And I have the same question as "randomuser1990":

"I still don't understand why it worked for the first 2 weeks, during multiple restarts and everything.

Adding: systemd.unified_cgroup_hierarchy=0 to the grub works.

But would love to understand what changed to make it not work anymore. Really appreciate the time you're taking to reply here btw"
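
For anyone landing here later, "adding it to the grub" usually means something along these lines (a sketch; on hosts booted via proxmox-boot-tool/systemd-boot the parameter goes into /etc/kernel/cmdline instead, followed by proxmox-boot-tool refresh):

Code:
# /etc/default/grub -- append the parameter to the existing options:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on systemd.unified_cgroup_hierarchy=0"
# then apply the change and reboot:
root@newton:~# update-grub
root@newton:~# reboot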

But now the containers are working.

Thx and best regards
Bernhard
 
And I have the same question as "randomuser1990":

"I still don't understand why it worked for the first 2 weeks, during multiple restarts and everything.

Adding: systemd.unified_cgroup_hierarchy=0 to the grub works.

But would love to understand what changed to make it not work anymore. Really appreciate the time you're taking to reply here btw"
I don't know; I would've needed to see the system during that time to diagnose it. If your container's systemd is old and requires legacy cgroups, I'm not sure why it would have worked for some time without them.
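
One way to verify that from the host is to read the container's systemd version (a sketch; 101 stands in for any affected CT, and only quite old systemd releases lack cgroup v2 support):

Code:
# print the systemd version inside the container:
root@newton:~# pct exec 101 -- systemctl --version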

But now the containers are working.
Glad to hear :)
 
