So I'm attempting to access a running container, and this is the second time I've received the following error within a week's time:
Code:
skyrider@skyrider:/$ sudo pct enter 200
lxc-attach: 200: cgroups/cgfsng.c: cgroup_attach_create_leaf: 2169 Too many references: cannot splice - Failed to send ".lxc/cgroup.procs" fds 9 and 14
lxc-attach: 200: conf.c: userns_exec_minimal: 5156 Too many references: cannot splice - Running function in new user namespace failed
lxc-attach: 200: cgroups/cgfsng.c: cgroup_attach_move_into_leaf: 2185 No data available - Failed to receive target cgroup fd
lxc-attach: 200: conf.c: userns_exec_minimal: 5194 No data available - Running parent function failed
lxc-attach: 200: attach.c: do_attach: 1237 No data available - Failed to receive lsm label fd
lxc-attach: 200: attach.c: do_attach: 1375 Failed to attach to container
If I restart the container/node, everything works again as it should, but I have no idea why I'm randomly getting this. I can access some other LXC containers just fine; only a few containers are affected by it.
config of the container:
Code:
cores: 4
features: nesting=1
hostname: newkirstin
memory: 10240
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.248.110.1,hwaddr=7A:F3:CC:F0:01:B4,ip=10.248.110.200/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-zfs:subvol-200-disk-0,size=50G
swap: 512
unprivileged: 1
and pveversion:
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-2-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-4
pve-kernel-helper: 7.2-4
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.3-1
proxmox-backup-file-restore: 2.2.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-10
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
I'd prefer not to restart it every time I can't access the container with pct.
I also wanted to add that I can access its terminal directly from the container's console, just not using pct enter from the main node.
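Since the console path still works while pct enter does not, it may be worth trying these alternative entry points from the node before restarting anything. This is only a sketch (VMID 200 taken from the post), and note that pct exec goes through the same lxc-attach machinery, so it may fail with the same cgroup error:

```shell
# Sketch only; VMID 200 as in the post.
# `pct exec` also uses lxc-attach under the hood, so it may hit the same error:
pct exec 200 -- /bin/bash

# `pct console` attaches via the container's console (the path that still works
# per the post); detach with Ctrl+a q:
pct console 200
```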
EDIT:
Apparently I have to reboot the entire node/server, as rebooting the container alone doesn't work; the container simply never reboots. Even a node reboot through the Proxmox GUI just gets stuck, and I'm forced to reboot the entire server through the system's control system.
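When a container or node shutdown hangs like this, a common culprit is a task stuck in uninterruptible sleep (blocked on I/O or a leaked namespace), which the kernel cannot kill. Before hard-resetting, listing D-state processes on the node can help identify what is blocking; this is a generic diagnostic sketch, nothing in it is specific to this setup:

```shell
# List processes in uninterruptible sleep (STAT containing 'D');
# these are typically what blocks a clean container stop or node reboot.
# WCHAN hints at the kernel function each task is waiting in.
ps -eo pid,stat,comm,wchan | awk 'NR==1 || $2 ~ /D/'

# A forced stop can also be attempted before resorting to a node reboot
# (pct stop kills the container instead of shutting it down cleanly):
# pct stop 200
```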
What the heck is causing this issue?