[SOLVED] LXC container service only up after 5 minutes

degudejung

Member
Jun 17, 2021
Hi,
I am seeing an issue with LXC containers in PVE 7.2, across different containers based on different templates (Debian 11 and Ubuntu 22.04), and by now even across different nodes. When I set up a container and install Docker with Portainer, or AdGuard (which is then the only service in the container), the container starts up just fine, BUT the service only shows up as enabled and active after 5 minutes, or pretty precisely 300 seconds after the container comes up. Within those first minutes,
Code:
service AdGuardHome status
...would simply return "loaded" and "inactive (dead)". When I try to run
Code:
service AdGuardHome start
...the terminal will not respond to that command until the 5 minutes are over. I can, however, run pretty much any other command I have tried so far, so the container itself seems to be running OK.
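
A way to see what systemd is still waiting on during that window (assuming the container runs systemd, as the Debian and Ubuntu templates do) is to list the queued jobs:
Code:
systemctl list-jobs
# while boot is blocked, this typically lists networking.service as
# "running" and the dependent jobs as "waiting"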

It makes no difference whether the container is set to start on boot, or not.

I could not find anything like that in the forum. Any ideas?
 
Hello,

Since the container runs normally on your PVE, it's not a PVE or LXC issue. However, you can try starting the container in the foreground using the command below and then start "AdGuardHome", to see if it prints any helpful error. Or you can run journalctl -f inside the container while you do service AdGuardHome start and see if it prints any helpful message.

Bash:
lxc-start -n <CTID> -F -l DEBUG
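# -F runs the container in the foreground and -l DEBUG raises the log
# level, so boot messages print straight to the terminal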
 
Hi, thanks for the hint. Indeed, the log you recommended revealed something, even though I don't yet see how it makes sense. journalctl -f shows:
Code:
Sep 09 15:13:02 ct-webmin systemd[1]: networking.service: start operation timed out. Terminating.
Sep 09 15:13:02 ct-webmin systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Sep 09 15:13:02 ct-webmin ifup[69]: Got signal Terminated, terminating...
Sep 09 15:13:02 ct-webmin ifup[69]: ifup: failed to bring up eth0
Sep 09 15:13:02 ct-webmin systemd[1]: networking.service: Failed with result 'timeout'.
Sep 09 15:13:02 ct-webmin systemd[1]: Failed to start Raise network interfaces.
Sep 09 15:13:02 ct-webmin systemd[1]: Reached target Network.
Sep 09 15:13:02 ct-webmin systemd[1]: Reached target Network is Online.
Sep 09 15:13:02 ct-webmin systemd[1]: Starting Samba NMB Daemon...
Sep 09 15:13:02 ct-webmin systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Sep 09 15:13:02 ct-webmin systemd[1]: Starting Permit User Sessions...
Sep 09 15:13:02 ct-webmin systemd[1]: Finished Permit User Sessions.
Sep 09 15:13:02 ct-webmin systemd[1]: Started Console Getty.
Sep 09 15:13:02 ct-webmin systemd[1]: Started Container Getty on /dev/tty1.
Sep 09 15:13:02 ct-webmin systemd[1]: Started Container Getty on /dev/tty2.
Sep 09 15:13:02 ct-webmin systemd[1]: Reached target Login Prompts.
Sep 09 15:13:02 ct-webmin systemd[1]: Started Samba NMB Daemon.
Sep 09 15:13:02 ct-webmin systemd[1]: Starting Samba SMB Daemon...
Sep 09 15:13:02 ct-webmin systemd[1]: Started Samba SMB Daemon.
Sep 09 15:13:02 ct-webmin postfix/postfix-script[1948]: starting the Postfix mail system
Sep 09 15:13:02 ct-webmin postfix/master[1950]: daemon started -- version 3.5.13, configuration /etc/postfix
Sep 09 15:13:02 ct-webmin systemd[1]: Started Postfix Mail Transport Agent (instance -).
Sep 09 15:13:02 ct-webmin systemd[1]: Starting Postfix Mail Transport Agent...
Sep 09 15:13:02 ct-webmin systemd[1]: Finished Postfix Mail Transport Agent.
Sep 09 15:13:02 ct-webmin systemd[1]: Reached target Multi-User System.
Sep 09 15:13:02 ct-webmin systemd[1]: Reached target Graphical Interface.
Sep 09 15:13:02 ct-webmin systemd[1]: Starting Update UTMP about System Runlevel Changes...
Sep 09 15:13:02 ct-webmin systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Sep 09 15:13:02 ct-webmin systemd[1]: Finished Update UTMP about System Runlevel Changes.
Sep 09 15:13:02 ct-webmin systemd[1]: Startup finished in 5min 1.068s.
Sep 09 15:13:12 ct-webmin systemd[1]: ssh@0-192.168.10.22:22-192.168.10.41:52786.service: Succeeded.
[this is not the AdGuardHome container but the issue is the same]
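
Note the line "Startup finished in 5min 1.068s", which matches the delay exactly. Standard systemd tooling should be able to pin those 5 minutes on a single unit:
Code:
systemd-analyze blame | head -n 5
# networking.service should top the list at roughly 5 minutes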

The error "ifup: failed to bring up eth0" makes no sense to me, since eth0 is the only network interface the container(s) have, and I am actually using it. In the case of Webmin, the login screen does show up in the browser when navigating to ip:port, and I can provide credentials and hit login. It will then, however, take those 5 minutes before I can navigate through the Webmin UI. Also, I can successfully ping the container from another computer and initiate an SSH login (the login fails despite correct credentials, but that's probably another issue).

ip a shows
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6e:21:eb:39:18:cb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.10.22/24 brd 192.168.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6c21:ebff:fe39:18cb/64 scope link
       valid_lft forever preferred_lft forever


OK and I double-checked with the AdGuardHome container. The issue is 100% identical:
Code:
root@ct-adguard:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:37:8b:ca brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.10.33/24 brd 192.168.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe37:8bca/64 scope link
       valid_lft forever preferred_lft forever
root@ct-adguard:/# journalctl -f
-- Journal begins at Sun 2022-08-21 15:16:18 CEST. --
Sep 09 15:29:36 ct-adguard sshd[172]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.41  user=root
Sep 09 15:29:38 ct-adguard sshd[172]: Failed password for root from 192.168.10.41 port 52800 ssh2
Sep 09 15:29:53 ct-adguard sshd[172]: Connection closed by authenticating user root 192.168.10.41 port 52800 [preauth]
Sep 09 15:29:53 ct-adguard systemd[1]: ssh@0-192.168.10.33:22-192.168.10.41:52800.service: Succeeded.
Sep 09 15:30:33 ct-adguard ifup[162]: XMT: Forming Solicit, 126590 ms elapsed.
Sep 09 15:30:33 ct-adguard ifup[162]: XMT:  X-- IA_NA 00:37:8b:ca
Sep 09 15:30:33 ct-adguard ifup[162]: XMT:  | X-- Request renew in  +3600
Sep 09 15:30:33 ct-adguard ifup[162]: XMT:  | X-- Request rebind in +5400
Sep 09 15:30:33 ct-adguard ifup[162]: XMT: Solicit on eth0, interval 110330ms.
Sep 09 15:30:33 ct-adguard dhclient[162]: XMT: Solicit on eth0, interval 110330ms.
Sep 09 15:32:24 ct-adguard ifup[162]: XMT: Forming Solicit, 236920 ms elapsed.
Sep 09 15:32:24 ct-adguard ifup[162]: XMT:  X-- IA_NA 00:37:8b:ca
Sep 09 15:32:24 ct-adguard ifup[162]: XMT:  | X-- Request renew in  +3600
Sep 09 15:32:24 ct-adguard ifup[162]: XMT:  | X-- Request rebind in +5400
Sep 09 15:32:24 ct-adguard ifup[162]: XMT: Solicit on eth0, interval 121880ms.
Sep 09 15:32:24 ct-adguard dhclient[162]: XMT: Solicit on eth0, interval 121880ms.
Sep 09 15:33:24 ct-adguard systemd[1]: networking.service: start operation timed out. Terminating.
Sep 09 15:33:24 ct-adguard systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Sep 09 15:33:24 ct-adguard ifup[69]: Got signal Terminated, terminating...
Sep 09 15:33:24 ct-adguard ifup[69]: ifup: failed to bring up eth0
Sep 09 15:33:24 ct-adguard systemd[1]: networking.service: Failed with result 'timeout'.
Sep 09 15:33:24 ct-adguard systemd[1]: Failed to start Raise network interfaces.
Sep 09 15:33:24 ct-adguard systemd[1]: Reached target Network.
Sep 09 15:33:24 ct-adguard systemd[1]: Reached target Network is Online.
Sep 09 15:33:24 ct-adguard systemd[1]: Starting AdGuard Home: Network-level blocker...
Sep 09 15:33:24 ct-adguard systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Sep 09 15:33:24 ct-adguard systemd[1]: Starting Permit User Sessions...
Sep 09 15:33:24 ct-adguard systemd[1]: Finished Permit User Sessions.
Sep 09 15:33:24 ct-adguard systemd[1]: Started Console Getty.
Sep 09 15:33:24 ct-adguard systemd[1]: Started Container Getty on /dev/tty1.
Sep 09 15:33:24 ct-adguard systemd[1]: Started Container Getty on /dev/tty2.
Sep 09 15:33:24 ct-adguard systemd[1]: Reached target Login Prompts.
Sep 09 15:33:24 ct-adguard systemd[1]: Started AdGuard Home: Network-level blocker.
Sep 09 15:33:25 ct-adguard postfix/postfix-script[342]: starting the Postfix mail system
Sep 09 15:33:25 ct-adguard postfix/master[344]: daemon started -- version 3.5.13, configuration /etc/postfix
Sep 09 15:33:25 ct-adguard systemd[1]: Started Postfix Mail Transport Agent (instance -).
Sep 09 15:33:25 ct-adguard systemd[1]: Starting Postfix Mail Transport Agent...
Sep 09 15:33:25 ct-adguard systemd[1]: Finished Postfix Mail Transport Agent.
Sep 09 15:33:25 ct-adguard systemd[1]: Reached target Multi-User System.
Sep 09 15:33:25 ct-adguard systemd[1]: Reached target Graphical Interface.
Sep 09 15:33:25 ct-adguard systemd[1]: Starting Update UTMP about System Runlevel Changes...
Sep 09 15:33:25 ct-adguard systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Sep 09 15:33:25 ct-adguard systemd[1]: Finished Update UTMP about System Runlevel Changes.
Sep 09 15:33:25 ct-adguard systemd[1]: Startup finished in 5min 1.239s.
 
After chewing on this for a moment, it dawned on me...

I had IPv4 set to static, BUT since I don't use IPv6 around here and found no way to disable v6 for VMs/CTs, I had just set IPv6 to DHCP. My DHCP server, however, apparently does not hand out IPv6 addresses. So I could ping/SSH via IPv4 and eth0 was up, but networking.service was still waiting for an IPv6 address.
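
In hindsight this is easy to verify: in the ip a output above, eth0 has its IPv4 address but only a link-local (fe80::) IPv6 address, i.e. no global IPv6 was ever obtained. A direct check with standard iproute2:
Code:
ip -6 addr show dev eth0 scope global
# empty output = no global IPv6 address; dhclient is still soliciting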

Now I set IPv6 to static as well and left the address blank. And whoosh, container and services up and running within 2 seconds...
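
For reference: on Debian-based CTs, Proxmox generates the container's /etc/network/interfaces from these settings, so the difference boils down to something like the following (a sketch; the exact generated file may vary by template):
Code:
# IPv6 = DHCP: ifup starts a DHCPv6 client, and networking.service
# blocks until it answers or times out (the 5 minutes seen above)
iface eth0 inet6 dhcp

# IPv6 = static with no address (or ip6=manual): nothing to wait for
iface eth0 inet6 manual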

Thank you again, @Moayad, you nudged me in the right direction!
 
Hello,

This thread is the top Google result for

"proxmox ct container 5 minutes sshd"

along with this other, similar thread: https://forum.proxmox.com/threads/ssh-doesnt-work-as-expected-in-lxc.54691/

I have created a Debian 12 CT container, and it reliably takes 5 minutes before sshd starts.


My router's DHCP also has IPv6 disabled internally, as my ISP does not support IPv6 either.


For anyone wondering how to turn off IPv6:

you can edit your CT's config file on the host, such as

Code:
root@proxmox:~# cat  /etc/pve/nodes/proxmox/lxc/106.conf
arch: amd64
cmode: shell
cores: 16
features: nesting=1
hostname: gputest
memory: 12000
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:0F:20:7F,ip=dhcp,ip6=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-106-disk-0,size=64G
swap: 512


changing the line

Code:
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:0F:20:7F,ip=dhcp,ip6=dhcp,type=veth

to the following (changing ip6=dhcp to ip6=manual, or removing ,ip6=dhcp entirely):

Code:
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:0F:20:7F,ip=dhcp,type=veth
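
The same change can also be made from the host CLI with pct. Note that -net0 replaces the whole interface definition, so the other settings have to be repeated (CTID 106 as in the example above):
Code:
pct set 106 -net0 name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:0F:20:7F,ip=dhcp,type=veth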

Or in the Proxmox web UI, in the container's Network tab: edit the interface and set IPv6 to Static, leaving the address empty.



And that solves it.

Now, a question for the Proxmox staff: how could we save future sysadmins from this Google search when they run a default CT container without working IPv6 DHCP?

Could the timeout be reduced from 5 minutes to something more practical, like 30 seconds?

Could a non-working IPv6 DHCP client be made non-blocking for system boot? (Would services re-bind to IPv6 if IPv6 becomes available post-boot? Surely yes, or else a temporary network failure could leave services permanently failed until reboot, right?)
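
For what it's worth, the 5 minutes match the default start timeout of Debian's networking.service, so inside an affected container the wait can at least be shortened with a systemd drop-in. A sketch of that workaround (it shortens the timeout; it does not remove the pointless DHCPv6 wait itself):
Code:
# inside the container: shorten networking.service's start timeout
mkdir -p /etc/systemd/system/networking.service.d
cat <<'EOF' >/etc/systemd/system/networking.service.d/timeout.conf
[Service]
TimeoutStartSec=30
EOF
systemctl daemon-reload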
 
