LXC container gets no network after container reboot

On PVE 8.3.1 I have an LXC container running Ubuntu Jammy. After a release upgrade to Noble, the IPv4 address is gone:
Bash:
root@hf-s01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0@if117: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff link-netnsid 0

The network only comes back online if I completely remove the container's network device in PVE and recreate it:
Bash:
root@hf-s01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
112: eth0@if113: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.77.1/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd0c:2c8e:53da:0:b061:bff:fe75:29e2/64 scope global dynamic mngtmpaddr
       valid_lft 7018sec preferred_lft 3418sec
    inet6 2001:9e8:3da4:2700:b061:bff:fe75:29e2/64 scope global dynamic mngtmpaddr
       valid_lft 7018sec preferred_lft 3418sec
    inet6 fe80::b061:bff:fe75:29e2/64 scope link
       valid_lft forever preferred_lft forever

When I reboot the LXC container, the IPv4 address is gone again.
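
For reference, this is roughly what the remove/recreate step looks like from the PVE host shell. The container ID 101 and the bridge name vmbr0 below are placeholders, not my actual values:
Bash:
# remove the existing network device from the container config (CT ID is a placeholder)
root@pve:~# pct set 101 --delete net0
# recreate it with the same static addressing (bridge name is a placeholder)
root@pve:~# pct set 101 --net0 name=eth0,bridge=vmbr0,ip=192.168.77.1/24,gw=192.168.77.253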

Inside the container, networking.service reports as up and running:
Bash:
root@hf-s01:~# systemctl status networking
● networking.service - Raise network interfaces
     Loaded: loaded (/usr/lib/systemd/system/networking.service; enabled; preset: enabled)
     Active: active (exited) since Sat 2024-12-21 09:20:07 UTC; 1s ago
       Docs: man:interfaces(5)
    Process: 260 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS)
    Process: 315 ExecStart=/bin/sh -c if [ -f /run/network/restart-hotplug ]; then /sbin/ifup -a --read-environment --allow=hotplug; fi (code=exited, status=0/SUCCESS)
   Main PID: 315 (code=exited, status=0/SUCCESS)
        CPU: 33ms

The interface configuration is generated by PVE as a systemd-networkd unit:
Bash:
root@hf-s01:~# cat /etc/systemd/network/eth0.network
[Match]
Name = eth0

[Network]
Description = Interface eth0 autoconfigured by PVE
Address = 192.168.77.1/24
Gateway = 192.168.77.253
DHCP = no
IPv6AcceptRA = false
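
Given that, I assume the address only gets applied if systemd-networkd is actually running and managing eth0. A quick check inside the container (plain systemd tools, nothing PVE-specific):
Bash:
# is the service that consumes /etc/systemd/network/*.network files running?
root@hf-s01:~# systemctl status systemd-networkd
# per-interface view: shows whether eth0 is managed and configured, or left unmanaged
root@hf-s01:~# networkctl status eth0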

The IPv4 routes are also gone:
Bash:
root@hf-s01:~# ip r
 
The interface itself is still present in the container, but it stays DOWN:
Bash:
root@hf-s01:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if124: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff link-netnsid 0
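
For completeness, this would be the manual equivalent of what the PVE config above should be applying. It is only a temporary, non-persistent test using the addresses from the eth0.network file, to rule out a problem with the veth link itself:
Bash:
# bring the link up and set the static address/route by hand (lost again on the next reboot)
root@hf-s01:~# ip link set eth0 up
root@hf-s01:~# ip addr add 192.168.77.1/24 dev eth0
root@hf-s01:~# ip route add default via 192.168.77.253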
 
