Very slow LXC container starts because of 'Failed to start Raise network interfaces.'

iceM0nger

New Member
Dec 18, 2022
hello fellow creators and users !!
I have a strange problem after updating my proxMOX-VE server to 8.0: every container (existing and newly created) starts very slowly...
When I start a container, it only comes up after about 5 minutes of waiting.
Sometimes a starting container also gives this error in the proxMOX Tasks History:

failed waiting for client: timed out
TASK ERROR: command '/usr/bin/termproxy 5900 --path /vms/105 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole105 -r winch -z lxc-console -n 105 -t 0 -e -1' failed: exit code 1

# systemctl status networking.service
* networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Mon 2023-11-27 09:02:07 UTC; 30s ago
Docs: man:interfaces(5)
Process: 58 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Main PID: 58 (code=exited, status=1/FAILURE)
CPU: 83ms

Nov 27 09:01:06 NGiNX-nPm ifup[58]: XMT: | X-- Request rebind in +5400
Nov 27 09:01:06 NGiNX-nPm dhclient[144]: XMT: Solicit on eth0, interval 118760ms.
Nov 27 09:01:06 NGiNX-nPm ifup[58]: XMT: Solicit on eth0, interval 118760ms.
Nov 27 09:02:07 NGiNX-nPm systemd[1]: networking.service: Start operation timed out. Terminating.
Nov 27 09:02:07 NGiNX-nPm ifup[58]: Got signal Terminated, terminating...
Nov 27 09:02:07 NGiNX-nPm ifup[58]: ifup: failed to bring up eth0
Nov 27 09:02:07 NGiNX-nPm systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Nov 27 09:02:07 NGiNX-nPm systemd[1]: networking.service: Failed with result 'timeout'.
Nov 27 09:02:07 NGiNX-nPm systemd[1]: Failed to start Raise network interfaces.
Nov 27 09:02:07 NGiNX-nPm systemd[1]: networking.service: Consumed 83ms CPU time.
# journalctl -u networking.service
-- Logs begin at Mon 2023-11-27 08:57:07 UTC, end at Mon 2023-11-27 09:02:30 UTC. --
Nov 27 08:57:08 NGiNX-nPm dhclient[144]: Internet Systems Consortium DHCP Client 4.4.1
Nov 27 08:57:08 NGiNX-nPm ifup[58]: Internet Systems Consortium DHCP Client 4.4.1
Nov 27 08:57:08 NGiNX-nPm dhclient[144]: Copyright 2004-2018 Internet Systems Consortium.
Nov 27 08:57:08 NGiNX-nPm ifup[58]: Copyright 2004-2018 Internet Systems Consortium.
Nov 27 08:57:08 NGiNX-nPm dhclient[144]: All rights reserved.
Nov 27 08:57:08 NGiNX-nPm ifup[58]: All rights reserved.
Nov 27 08:57:08 NGiNX-nPm dhclient[144]: For info, please visit https://www.isc.org/software/dhcp/
Nov 27 08:57:08 NGiNX-nPm ifup[58]: For info, please visit https://www.isc.org/software/dhcp/
Nov 27 08:57:08 NGiNX-nPm dhclient[144]:
Nov 27 08:57:08 NGiNX-nPm dhclient[144]: Listening on Socket/eth0
Nov 27 08:57:08 NGiNX-nPm ifup[58]: Listening on Socket/eth0
Nov 27 08:57:08 NGiNX-nPm dhclient[144]: Sending on Socket/eth0
Nov 27 08:57:08 NGiNX-nPm ifup[58]: Sending on Socket/eth0
Nov 27 08:57:08 NGiNX-nPm ifup[58]: PRC: Soliciting for leases (INIT).
Nov 27 08:57:08 NGiNX-nPm ifup[58]: XMT: Forming Solicit, 0 ms elapsed.
Nov 27 08:57:08 NGiNX-nPm ifup[58]: XMT: X-- IA_NA 11:59:4a:d2
Nov 27 08:57:08 NGiNX-nPm ifup[58]: XMT: | X-- Request renew in +3600
Nov 27 08:57:08 NGiNX-nPm ifup[58]: XMT: | X-- Request rebind in +5400
Nov 27 08:57:08 NGiNX-nPm dhclient[144]: XMT: Solicit on eth0, interval 1050ms.
Nov 27 08:57:08 NGiNX-nPm ifup[58]: XMT: Solicit on eth0, interval 1050ms.
Nov 27 08:57:10 NGiNX-nPm ifup[58]: XMT: Forming Solicit, 1050 ms elapsed.
Nov 27 08:57:10 NGiNX-nPm ifup[58]: XMT: X-- IA_NA 11:59:4a:d2
Nov 27 08:57:10 NGiNX-nPm ifup[58]: XMT: | X-- Request renew in +3600
Nov 27 08:57:10 NGiNX-nPm ifup[58]: XMT: | X-- Request rebind in +5400
Nov 27 08:57:10 NGiNX-nPm dhclient[144]: XMT: Solicit on eth0, interval 2030ms.
Nov 27 08:57:10 NGiNX-nPm ifup[58]: XMT: Solicit on eth0, interval 2030ms.
Nov 27 08:57:12 NGiNX-nPm ifup[58]: XMT: Forming Solicit, 3080 ms elapsed.
Nov 27 08:57:12 NGiNX-nPm ifup[58]: XMT: X-- IA_NA 11:59:4a:d2
Nov 27 08:57:12 NGiNX-nPm ifup[58]: XMT: | X-- Request renew in +3600
Nov 27 08:57:12 NGiNX-nPm ifup[58]: XMT: | X-- Request rebind in +5400
Nov 27 08:57:12 NGiNX-nPm dhclient[144]: XMT: Solicit on eth0, interval 4000ms.
Nov 27 08:57:12 NGiNX-nPm ifup[58]: XMT: Solicit on eth0, interval 4000ms.
Nov 27 08:57:16 NGiNX-nPm ifup[58]: XMT: Forming Solicit, 7080 ms elapsed.
Nov 27 08:57:16 NGiNX-nPm ifup[58]: XMT: X-- IA_NA 11:59:4a:d2
Nov 27 08:57:16 NGiNX-nPm ifup[58]: XMT: | X-- Request renew in +3600
My container's /etc/network/interfaces:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.10.1.5/24
gateway 10.10.1.1

iface eth0 inet6 dhcp
And the output of # ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether bc:24:11:59:4a:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.1.5/24 brd 10.10.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::be24:11ff:fe59:4ad2/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:e2:92:10:8a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e2ff:fe92:108a/64 scope link
valid_lft forever preferred_lft forever
4: br-7e270e85bec2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:46:c8:0d:d9 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-7e270e85bec2
valid_lft forever preferred_lft forever
inet6 fe80::42:46ff:fec8:dd9/64 scope link
valid_lft forever preferred_lft forever
6: vethb386fea@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 12:b2:de:bd:cd:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::10b2:deff:febd:cda8/64 scope link
valid_lft forever preferred_lft forever
8: vethe4fc57a@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7e270e85bec2 state UP group default
link/ether 96:6d:41:b7:cc:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::946d:41ff:feb7:ccf6/64 scope link
valid_lft forever preferred_lft forever

I can't see where exactly the problem is. Any help would be appreciated.
 
I had the same problem, and the culprit was DHCP for IPv6. Although I have IPv6 at home, it seems my router is not running a DHCPv6 server; addresses are handed out via SLAAC instead.

If I were you, I would try either disabling IPv6, or switching to SLAAC if you actually have IPv6 in your network.
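In case it helps: if DHCPv6 is indeed what is timing out, the change would go in the container's /etc/network/interfaces. A rough sketch, reusing the static IPv4 setup from the first post, would be to replace the dhcp method on the inet6 line with auto (SLAAC), or to drop the inet6 stanza entirely:

Code:
auto eth0
iface eth0 inet static
        address 10.10.1.5/24
        gateway 10.10.1.1

# SLAAC instead of DHCPv6; delete this stanza to disable IPv6 autoconfig
iface eth0 inet6 auto

After that, a restart of networking.service (or of the container) should bring eth0 up without waiting for the DHCPv6 Solicits to time out.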
 
proxMOX-VE
Every time I read something like that, it hurts. Please do not use these spellings, they are simply completely wrong.

Code:
The name Proxmox
When referring to the Proxmox name, the first letter must be capitalized followed by lowercase letters like
for example: Proxmox
When referring to one of the Proxmox products use Proxmox together with the product name like for
example ‘Proxmox Virtual Environment’ (in short: Proxmox VE) or ‘Proxmox Mail Gateway’.
When referring to the company you can either use the name ‘Proxmox’ or the full company name ‘Proxmox
Server Solutions GmbH’.
DO:
• Proxmox
• Proxmox Virtual Environment (or Proxmox VE)
• Proxmox Mail Gateway
• Proxmox Server Solutions GmbH
DON'T:
• don’t use all lower case (no: proxmox)
• don’t use all uppercase (no: PROXMOX)
• don’t mix upper- and lowercase in the middle of the name (never: ProxMox)
• don’t simply use our website URL in a sentence instead of the name Proxmox (no: “The company
proxmox.com is...’ if you want to say: ‘The company Proxmox is...’)

Source: https://www.proxmox.com/en/about/media-kit

If I understand correctly, you have Docker running inside an LXC container, right?

Basically, the recommendation is not to run Docker in a container but in a VM. See also: https://pve.proxmox.com/wiki/Linux_Container
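For completeness: if you keep Docker inside the container despite that recommendation, the container normally needs nesting (and, for unprivileged containers, keyctl) enabled. Assuming CT ID 105 from the task log above, the relevant line in /etc/pve/lxc/105.conf would look like:

Code:
features: keyctl=1,nesting=1

or, equivalently, via the CLI on the host: pct set 105 --features keyctl=1,nesting=1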
 
