I am seeing similar slow startup of LXC containers to that reported in this thread and elsewhere. However, unlike that report, I'm not using Docker.
I have a pretty "vanilla" installation of Proxmox 7.4.1, except for the fact that the bulk storage is on a BTRFS mirror. This machine currently hosts two LXC containers, one running Samba and Cockpit for file sharing, and the other running Plex.
Both of the LXC containers often exhibit very slow boot times, up to 6 minutes. During this time the CPU and disks are pretty much idle, and the whole OS in the container is unresponsive: no console or terminal logins are possible, and its web services are unreachable. Then the guest wakes up and runs as if nothing had happened.
My guess was that some kind of timeout in the network setup is the cause, and that's supported by the fact that the file server (for example) logs a "failed to start" error for interface eth0. Bizarrely, though, once the container finishes booting the network works fine, at least as far as access via the IPv4 address goes.
Code:
Service logs (April 15, 2023)
5:51 PM  systemd   Failed to start Raise network interfaces.
5:51 PM  systemd   networking.service: Failed with result 'timeout'.
5:51 PM  systemd   networking.service: Killing process 89 (isc-timer) with signal SIGKILL.
5:51 PM  ifup      ifup: failed to bring up eth0
5:51 PM  ifup      Got signal Terminated, terminating...
5:51 PM  systemd   networking.service: Main process exited, code=exited, status=1/FAILURE
5:51 PM  systemd   networking.service: start operation timed out. Terminating.
5:51 PM  dhclient  XMT: Solicit on eth0, interval 109230ms.
5:51 PM  ifup      XMT: Solicit on eth0, interval 109230ms.
5:51 PM  ifup      XMT: | X-- Request rebind in +5400
I am guessing here, but could this be caused by IPv6 networking being enabled but failing to start, while the IPv4 side works OK? I must admit IPv6 is something I haven't got my head around at all, and it's quite possible that my home router is old enough not to fully understand IPv6 either.
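In case it helps anyone point me in the right direction, this is where I was planning to look inside the container to confirm whether it's actually attempting DHCPv6 (assuming a Debian-based template using ifupdown; the paths would differ for other setups):

Code:
# Inside the container (Debian-style ifupdown assumed)
# Check whether eth0 has an inet6 stanza requesting DHCPv6 or SLAAC
cat /etc/network/interfaces

# See what addresses the interface actually ends up with once booted
ip -6 addr show dev eth0

# Full log of the failed unit from the current boot
journalctl -u networking.service -b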
If there's an easy way to just disable IPv6 and avoid the lengthy boot timeout, that would certainly work for me for now. I guess I will have to get my head around IPv6 at some point, but I'm rather hoping that networks will be mostly self-configuring by then and the days of hand-cranking IP address configs will be long gone.
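If the culprit really is a DHCPv6 solicit timing out, my plan (untested, so corrections welcome) was to drop the IPv6 setting from the container's net0 on the Proxmox host rather than editing /etc/network/interfaces inside the guest, since I believe Proxmox regenerates that file for managed containers. The VMID 101, the bridge name, and the net0 line shown are just placeholders; the real values come from pct config:

Code:
# On the Proxmox host (101 = placeholder VMID)
pct config 101
# illustrative output, my guess at the culprit being ip6=dhcp:
#   net0: name=eth0,bridge=vmbr0,ip=dhcp,ip6=dhcp,type=veth

# Re-set net0 with the same values but without any ip6 option,
# so the guest no longer tries DHCPv6 at boot
pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp

# Restart the container to pick up the change
pct stop 101 && pct start 101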