I seem to have gotten a specific LXC ID in a broken state.
Two or three containers have existed under this ID: each was removed at some point, and then a new one was cloned, which took over the freed ID.
This has led to the following state in /var/log/syslog:
Jun 18 11:22:28 zoar pvestatd[31428]: lxc status update error: can't open '/sys/fs/cgroup/blkio/lxc/107/ns/blkio.throttle.io_service_bytes' - No such file or directory
Jun 18 11:22:28 zoar pvestatd[31428]: lxc console cleanup error: can't open '/sys/fs/cgroup/blkio/lxc/107/ns/blkio.throttle.io_service_bytes' - No such file or directory
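As far as I understand, pvestatd polls that blkio file for every running container, so the error means the per-container cgroup path it expects is gone. A quick check (paths copied from the errors above):

# Confirm the path pvestatd expects for CT 107 is actually missing:
ls -d /sys/fs/cgroup/blkio/lxc/107* 2>/dev/null
cat /sys/fs/cgroup/blkio/lxc/107/ns/blkio.throttle.io_service_bytes 2>/dev/null \
    || echo "expected path is missing, matching the log"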
There are several dashed entries for LXC ID 107 in the cgroup folders, which appear to be causing the issue:
find /sys/fs/cgroup/ -name "107-*"
/sys/fs/cgroup/memory/lxc/107-3
/sys/fs/cgroup/memory/lxc/107-1
/sys/fs/cgroup/memory/lxc/107-2
/sys/fs/cgroup/perf_event/lxc/107-3
/sys/fs/cgroup/perf_event/lxc/107-1
/sys/fs/cgroup/perf_event/lxc/107-2
/sys/fs/cgroup/rdma/lxc/107-3
/sys/fs/cgroup/rdma/lxc/107-1
/sys/fs/cgroup/rdma/lxc/107-2
/sys/fs/cgroup/cpuset/lxc/107-3
/sys/fs/cgroup/cpuset/lxc/107-1
/sys/fs/cgroup/cpuset/lxc/107-2
/sys/fs/cgroup/cpu,cpuacct/lxc/107-3
/sys/fs/cgroup/cpu,cpuacct/lxc/107-1
/sys/fs/cgroup/cpu,cpuacct/lxc/107-2
/sys/fs/cgroup/blkio/lxc/107-3
/sys/fs/cgroup/blkio/lxc/107-1
/sys/fs/cgroup/blkio/lxc/107-2
/sys/fs/cgroup/pids/lxc/107-3
/sys/fs/cgroup/pids/lxc/107-1
/sys/fs/cgroup/pids/lxc/107-2
/sys/fs/cgroup/hugetlb/lxc/107-3
/sys/fs/cgroup/hugetlb/lxc/107-2
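What I'm considering trying (untested, and assuming the 107-* directories are truly stale with no live tasks): cgroupfs entries cannot be deleted with rm, but an empty cgroup can be removed with rmdir, so check cgroup.procs first:

# Check each stale 107-* cgroup for live tasks; rmdir only succeeds
# if the cgroup is empty (no processes, no child cgroups).
find /sys/fs/cgroup/ -depth -type d -name "107-*" | while read -r d; do
    if [ -s "$d/cgroup.procs" ]; then
        echo "$d still has processes:"; cat "$d/cgroup.procs"
    else
        rmdir "$d"
    fi
done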
pveversion:
proxmox-ve: 5.4-1 (running kernel: 4.15.18-15-pve)
pve-manager: 5.4-6 (running version: 5.4-6/aa7856c5)
pve-kernel-4.15: 5.4-3
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-10
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-52
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-43
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-37
pve-container: 2.0-39
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-52
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
At some point, the 107 container had this in its config; possibly related?
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.mount.auto: proc:rw
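To see whether those raw keys are still in the current config, something like this should work (/etc/pve/lxc/107.conf being the standard PVE config path):

# List any remaining raw lxc.* overrides in the container config:
grep -n '^lxc\.' /etc/pve/lxc/107.conf
pct config 107    # the config as Proxmox parses it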
Any ideas how to fix this? Can I (and should I) safely remove the "107-x" folders?
Note that this breaks more than just LXC 107: when 107 is started in this state, the overview of containers in the Proxmox UI turns grey, all of them!
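If cleaning up the stale cgroups helps, I assume the overview comes back after restarting the stats daemon, since pvestatd is what feeds the UI status (untested guess on my part):

# Restart the stats daemon after the cleanup; the greyed-out guests
# suggest it stopped reporting once it hit the cgroup error.
systemctl restart pvestatd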