[SOLVED] Various (interrelated?) LXC errors

limone

Hi,

My setup has been running for almost a year without any problems, but all of a sudden LXC has started misbehaving.

Code:
proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-12
pve-kernel-helper: 6.4-12
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1

I know, I'm still on version 6 and an upgrade is pending, but I would not like to upgrade while the system is in this state.


This is how the problems manifest themselves:

  1. I cannot attach to an unprivileged container via "pct enter CTID"; the console via the web interface works, and privileged containers work as well.
    Bash:
    pct enter 100
    lxc-attach: 100: cgroups/cgfsng.c: cgroup_attach_create_leaf: 2290 Too many references: cannot splice - Failed to send ".lxc/cgroup.procs" fds 5 and 7
    lxc-attach: 100: conf.c: userns_exec_minimal: 4215 Too many references: cannot splice - Running function in new user namespace failed
    lxc-attach: 100: cgroups/cgfsng.c: cgroup_attach_move_into_leaf: 2306 No such file or directory - Failed to receive target cgroup fd
    lxc-attach: 100: conf.c: userns_exec_minimal: 4256 No such file or directory - Running parent function failed

  2. I cannot start a freshly created unprivileged container; creating it works, but starting it fails. Privileged containers work, though.
    This does not seem related to the guest OS; I tried Ubuntu 20.04 and Alpine 3.15. (A debug log for such a failing start can be captured as shown below this list.)
    Bash:
    __safe_mount_beneath_at: 1106 Function not implemented - Failed to open 51(dev)
    __safe_mount_beneath_at: 1106 Function not implemented - Failed to open 54(full)
    __safe_mount_beneath_at: 1106 Function not implemented - Failed to open 54(null)
    __safe_mount_beneath_at: 1106 Function not implemented - Failed to open 54(random)
    __safe_mount_beneath_at: 1106 Function not implemented - Failed to open 54(tty)
    __safe_mount_beneath_at: 1106 Function not implemented - Failed to open 54(urandom)
    __safe_mount_beneath_at: 1106 Function not implemented - Failed to open 54(zero)
    lxc_setup_devpts_child: 1571 Too many references: cannot splice - Failed to send devpts fd to parent
    lxc_setup: 3427 Failed to setup new devpts instance
    do_start: 1218 Failed to setup container "103"
    __sync_wait: 36 An error occurred in another process (expected sequence number 5)
    __lxc_start: 1999 Failed to spawn container "103"
    TASK ERROR: startup for container '103' failed

  3. I am afraid to restart any of the running unprivileged containers.
    I have not seen this or any related problem reported here yet :(
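
(As referenced in item 2: the full startup log of a failing container can be captured with a standard LXC debugging command; the container ID 103 and the log path are just the values from my error output above.)

Bash:
lxc-start -n 103 -F -l DEBUG -o /tmp/lxc-103.log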
 
hi,

Code:
proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-12
pve-kernel-helper: 6.4-12
pve-kernel-5.4.162-1-pve: 5.4.162-2

My setup has been running for almost a year without any problems
have you tried rebooting the host?

you have kernel 5.4.157-1-pve running but 5.4.162-1-pve installed, so I assume you made some package upgrades (including pve-kernel) but forgot to reboot?
or are you using the older kernel on purpose?
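
a quick way to compare the running kernel against the installed kernel packages (plain Debian tooling, nothing PVE-specific):

Bash:
uname -r                            # kernel currently running
dpkg -l 'pve-kernel-*' | grep ^ii   # pve kernel packages installed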
 
I did
Bash:
apt update && apt dist-upgrade
to see if it would resolve the problem, but it didn't. Maybe that's the cause of the different kernels?
The last reboot was approximately one month ago; I have not run the server for a year without rebooting, there have been a few reboots and upgrades in that time :D

I would prefer not to do a reboot since the system is in a remote location and I don't have KVM or other remote tools.
I know this is my problem, but I was hoping it would work without rebooting.
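
(In case it helps someone else: one way to check whether the running kernel is outdated after an upgrade, assuming the Debian needrestart package is installed:)

Bash:
needrestart -k   # reports whether the running kernel is older than the newest installed one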
 
I would prefer not to do a reboot since the system is in a remote location and I don't have KVM or other remote tools.
I know this is my problem, but I was hoping it would work without rebooting.
you always need to reboot after kernel upgrades ;)
 
Usually I do, but in this case it's not the cause of the problem, because I upgraded it after the error occurred.
 
please try rebooting and report back if the issue is still there...

might make sense to post your container configurations here as well: pct config CTID
 
Okay, then I have to find a date to go to the server, and then I should build a PiKVM as soon as possible.

I think posting the container config will not help here, as this affects all unprivileged containers, and especially completely newly created containers with default settings, so they just look like this:

Bash:
arch: amd64
cores: 2
hostname: test
memory: 256
net0: name=eth0,bridge=vmbr0,gw=***,hwaddr=***,ip=***,type=veth
onboot: 1
ostype: debian
rootfs: NVMe:vm-100-disk-0,size=5G
startup: order=1
swap: 0
unprivileged: 1
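
(For reference, a throwaway test container like the one above can be created and started with something like this; the container ID, storage name, and template file are just placeholders from my setup:)

Bash:
pct create 103 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz \
    --unprivileged 1 --hostname test --cores 2 --memory 256 --swap 0 \
    --rootfs NVMe:5 --net0 name=eth0,bridge=vmbr0,ip=dhcp --onboot 1
pct start 103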
 
I think posting the container config will not help here, as this affects all unprivileged containers, and especially completely newly created containers with default settings, so they just look like this:
yes, the container config looks to be mostly default settings (besides the startup order, but that shouldn't affect this).
that would confirm my suspicion that you need to reboot after the kernel upgrade, since the issue also affects all containers on the host...

Okay, then I have to find a date to go to the server, and then I should build a PiKVM as soon as possible.
good luck! don't hesitate to write back here if the issue persists after reboot

I know, I'm still on version 6 and an upgrade is pending, but I would not like to upgrade while the system is in this state.

also FWIW, you should look into upgrading to PVE7 while you're at it [0]

[0]: https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
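
the guide also covers the pve6to7 checklist script shipped with PVE 6.4; it can be run beforehand to spot potential problems:

Bash:
pve6to7 --full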
 
I did the restart today; now it's working again.
I could not upgrade to version 7 due to lack of time; maybe next week.

Code:
proxmox-ve: 6.4-1 (running kernel: 5.4.166-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-13
pve-kernel-helper: 6.4-13
pve-kernel-5.4.166-1-pve: 5.4.166-1
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1
 
I did the restart today; now it's working again.
good to hear that!

Code:
proxmox-ve: 6.4-1 (running kernel: 5.4.166-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-13
pve-kernel-helper: 6.4-13
pve-kernel-5.4.166-1-pve: 5.4.166-1

yes, looks fine (installed kernel is running)

I could not upgrade to version 7 due to lack of time; maybe next week.
okay, just follow the linked instructions from the above post.

and please mark this thread as [SOLVED] :)
 
