"unregister_netdevice: waiting for lo to become free" after container shutdown

May 18, 2019
Los Angeles, CA USA


After an unsuccessful shutdown, a stop was issued, which also failed:

Code:
lxc-stop: 101: commands_utils.c: lxc_cmd_sock_rcv_state: 72 Resource temporarily unavailable - Failed to receive message
command 'lxc-stop -n 101 --nokill --timeout 60' failed: exit code 1
TASK OK

Code:
# pveversion --verbose
proxmox-ve: 6.0-2 (running kernel: 5.0.21-4-pve)
pve-manager: 6.0-15 (running version: 6.0-15/52b91481)
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-4-pve: 5.0.21-9
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-12
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.0-9
pve-container: 3.0-14
pve-docs: 6.0-9
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-8
pve-firmware: 3.0-4
pve-ha-manager: 3.0-5
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-1
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2


Code:
# cat /proc/101/stack
[<0>] rescuer_thread+0x2c0/0x3a0
[<0>] kthread+0x120/0x140
[<0>] ret_from_fork+0x35/0x40
[<0>] 0xffffffffffffffff


Code:
# pct config 101
arch: amd64
cores: 2
description: _______
hostname: _______
memory: 2048
net0: name=eth0,bridge=vmbr1,firewall=1,gw=10.10.10.1,gw6=2607:____:____::1,hwaddr=____________,ip=10.10.10.101/24,ip6=2607:____:____::101/64,type=veth
onboot: 1
ostype: ubuntu
protection: 1
rootfs: lvmt_containers1-nvme1:vm-101-disk-0,size=31G
swap: 1024
unprivileged: 1

The only solution is a reboot, but that causes the PVE Firewall (which I am not using - I have pfSense, so it should be disabled) to hang. 8 minutes and still going.
 

Alwin

Proxmox Staff Member
Did this resolve after the reboot?
 
> So, it shows up after some time? Could be a hardware issue as well.

It is not a hardware issue: it is reproducible whenever a container fails to shut down and a stop order is then issued. The issue has occurred before with Proxmox and LXC; it is documented in this forum and in other LXC discussions elsewhere.
 

somebody2000

Member
Jul 17, 2020
yep - still there...

Code:
root@pve1:~# uname -a
Linux pve1 5.4.41-1-pve #1 SMP PVE 5.4.41-1 (Fri, 15 May 2020 15:06:08 +0200) x86_64 GNU/Linux
root@pve1:~#
Message from syslogd@pve1 at Jul 17 11:22:53 ...
kernel:[4417919.318633] unregister_netdevice: waiting for lo to become free. Usage count = 1

Message from syslogd@pve1 at Jul 17 11:23:54 ...
kernel:[4417980.021713] unregister_netdevice: waiting for lo to become free. Usage count = 1

Message from syslogd@pve1 at Jul 17 11:24:04 ...
kernel:[4417990.101528] unregister_netdevice: waiting for lo to become free. Usage count = 1
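
The message repeats every minute or so once the refcount leaks. A quick way to confirm you are hitting the same issue is to count the occurrences in the kernel log - a minimal sketch; on a live node you would pipe `dmesg` or `journalctl -k` instead of the embedded sample excerpt used here so the snippet is self-contained:

```shell
# On a real node:
#   dmesg | grep -c 'unregister_netdevice: waiting for'
#   journalctl -k | grep -c 'unregister_netdevice: waiting for'
# Sample excerpt standing in for dmesg output:
sample='[4417919.318633] unregister_netdevice: waiting for lo to become free. Usage count = 1
[4417980.021713] unregister_netdevice: waiting for lo to become free. Usage count = 1
[4417990.101528] unregister_netdevice: waiting for lo to become free. Usage count = 1'

count=$(printf '%s\n' "$sample" | grep -c 'unregister_netdevice: waiting for')
echo "seen $count times"
```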
 

Stoiko Ivanov

Proxmox Staff Member
hmm - we could not deterministically reproduce this here. Are there any specifics common to your containers (and environment)?
for example:
* do all your containers/setups have IPv6 addresses?
* do you use the pve-firewall?
** if yes, any particular rulesets?
* any specific services inside the containers (i.e. which applications run inside them)?

That would help us in trying to reproduce this.
Thanks!
 
Yes to IPv6.
No pve-firewall; pfSense as a guest VM for firewalling.
Is the rulesets question applicable if I am not using pve-firewall?
Yes. I can't disclose them publicly, but I'm happy to share privately. However, prior experience with this situation (not being able to disclose the name of the running service publicly) ended with Tom saying I should purchase support. So I guess this is the end of the road for me, again.
 

Stoiko Ivanov

Proxmox Staff Member
> Yes to IPv6
> No pve-firewall. pfSense as a guest VM for firewalling.

I guess that can serve as a starting point - I'll try to reproduce it with an IPv6 setup.
- I assume the node and containers are dual-stack (IPv4+IPv6) and not IPv6-only?
- if yes: are both IPv4 and IPv6 configured on the same network interface in the container? (and if not, do you have 2 interfaces connected to the same bridge?)
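
For reference, a dual-stack configuration on a single container interface - the case asked about - typically looks like this in the container's /etc/network/interfaces. The addresses below are illustrative (2001:db8::/32 is the IPv6 documentation prefix), and on PVE this file is normally generated from the `net0` line of the container config rather than edited by hand:

```
auto eth0
iface eth0 inet static
    address 10.10.10.101/24
    gateway 10.10.10.1

iface eth0 inet6 static
    address 2001:db8::101/64
    gateway 2001:db8::1
```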
 

ulvida

Member
Feb 11, 2020
Hello Stoiko, hello Gaia. Same problem here, or almost: a similar kernel message:

Code:
root@asado:~#
Message from syslogd@asado at Aug 10 13:15:36 ...
 kernel:[8278276.499346] unregister_netdevice: waiting for lo to become free. Usage count = 1

Message from syslogd@asado at Aug 10 13:15:46 ...
 kernel:[8278286.643117] unregister_netdevice: waiting for lo to become free. Usage count = 1

...
but not exactly in the same situation: it doesn't appear on the node after a failed shutdown of a container, but inside some of the running containers.

Answering the questions:
> * do all your containers/setups have ipv6 addresses?
yes, we manage IPv6 dual-stack on all hosts and nodes.

> * do you use the pve-firewall?
yes, we do.

> ** any particular rulesets if yes
Nothing very special, firewall rules for IPv4 and IPv6.

> * any specific services inside the containers (i.e. which applications runs inside them)?
The reported error is on a Debian 9 container with AlternC (a LAMP hosting panel).

Our configuration:
Code:
root@guri:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-2-pve: 5.3.13-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 

Stoiko Ivanov

Proxmox Staff Member
> but not exactly in the same situation: it's not in the node after a failed shutdown of a container, but inside some of the running containers.

this sounds a bit odd - at least compared to the other users experiencing this.

In any case, your system is outdated - please install the latest updates and see if that improves things.
Thanks!
 
Still happening on

Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-1
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.1-13
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.0.0-13
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1

It happens every time an LXC container using IPv6 is shut down.
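
For anyone trying to reproduce this: the trigger described above is just the shutdown of a dual-stack container. A minimal recipe on a PVE node might look like the following - the CT id, storage name, template filename, bridge, and addresses are all illustrative placeholders, not taken from a real setup:

```
pct create 200 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
    --unprivileged 1 \
    --net0 'name=eth0,bridge=vmbr1,ip=10.10.10.200/24,gw=10.10.10.1,ip6=2001:db8::200/64,gw6=2001:db8::1,type=veth'
pct start 200
sleep 30
pct shutdown 200
dmesg | tail -n 20    # watch for "unregister_netdevice: waiting for lo to become free"
```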
 

mbosma

Member
Dec 3, 2018
Just wanted to note I'm having the same issue on pve 6.3-1:
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-3
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-3
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
