Proxmox, Rocky Linux 8, LXC, and systemd hang

seneca214

Hello,

Looking for help troubleshooting an issue we've come across with Proxmox 7.2 - 7.4 hosts and Rocky Linux 8 containers. The container starts and runs normally, but intermittently, when a systemctl command is used, such as systemctl start <service>, systemctl hangs indefinitely. Shutting such a container down then fails with exit code 1 once the timeout is reached, and it has to be force-stopped. This seems specific to Rocky Linux 8 containers on Proxmox: we have a few Rocky Linux 8 KVM VMs that don't have this issue, and our CentOS 7 and Ubuntu containers don't appear to have it either.
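One way to see which cgroup layout a container actually gets is a quick check from inside it (a hedged diagnostic sketch, not from the original thread):

```shell
# Inside the container: report the filesystem type mounted at /sys/fs/cgroup.
# "cgroup2fs" means the unified (v2) hierarchy; "tmpfs" indicates a legacy or
# hybrid v1 layout, which the container's systemd may not be expecting.
stat -fc %T /sys/fs/cgroup
```

If the hanging containers report a different layout than the working ones, that points at the cgroup hierarchy rather than at Rocky 8 itself.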

We're using the rockylinux-8-default... template from Proxmox as well as ZFS storage for each container's root disk.

Anyone else run into this with Rocky 8? Any ideas to help troubleshoot?

Thanks
 
Also, I have PVE version 7.4-3 installed, and with an updated Rocky 8 container there is no problem.
 
We believe the underlying issue here is the change made to allow CentOS 7 containers to work with Proxmox 7 or greater: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_security_considerations. Specifically, adding the kernel boot argument 'systemd.unified_cgroup_hierarchy=0' on the Proxmox server. If we remove this boot argument, reboot the Proxmox server, and start a Rocky 8 container, the hanging no longer occurs.
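For reference, removing that flag on a GRUB-booted host looks roughly like this (a sketch only; hosts booted via systemd-boot keep the command line in /etc/kernel/cmdline instead, so check /etc/default/grub before editing):

```shell
# On the Proxmox host: drop the cgroup v1 compatibility flag from the kernel
# command line, then regenerate the boot configuration and reboot.
sed -i 's/ *systemd.unified_cgroup_hierarchy=0//' /etc/default/grub
update-grub
reboot   # the change takes effect on the next boot
```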

Is it not possible to have CentOS 7 and Rocky Linux 8 containers on the same Proxmox server?

Example server is:

pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
 
For any others who may run into this: we believe we've found a solution that allows both CentOS 7 and Rocky 8 containers on the same Proxmox host. For Rocky 8 containers, add the following line to the /etc/pve/lxc/ID.conf file:

lxc.mount.auto = cgroup:rw:force

We found this hint from the Debian wiki here: https://wiki.debian.org/LXC/CGroupV2
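Applied concretely, the workaround looks like this ("100" is a hypothetical container ID, substitute your own):

```shell
# On the Proxmox host: append the cgroup mount override to the container's
# config, then restart the container so LXC picks up the change.
echo 'lxc.mount.auto = cgroup:rw:force' >> /etc/pve/lxc/100.conf
pct stop 100
pct start 100   # the mount option only takes effect on a fresh start
```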

We are still curious if others are able to run CentOS 7 and Rocky 8 containers on the same host without these changes.
 
Just in case it helps to rule something out: I am running Proxmox 6 with CentOS 7 and Rocky 8 containers without a problem. I also have the setting in GRUB (systemd.unified_cgroup_hierarchy=0) in order to allow compatibility when upgrading to PVE 7 (maybe some day I will consider it stable enough!).