systemd-networkd.service disabled since I updated

stsinc (Apr 15, 2021)
Hi all,

I updated my three Proxmox machines/nodes yesterday and am now unable to perform basic network operations.
I checked the logs, and it appears that systemd-networkd.service is disabled at startup and will no longer start.

The exact message I get when I run systemctl status systemd-networkd.service is:

Code:
systemd-networkd.service - Network Service
   Loaded: loaded (/lib/systemd/system/systemd-networkd.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:systemd-networkd.service(8)

If I then manually run systemctl start systemd-networkd, the service does start, but only the IPv6 interface comes up, NOT the IPv4 interface.
This is what I get when I run ip addr:
Code:
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 6c:4b:90:c6:8c:01 brd ff:ff:ff:ff:ff:ff

For the record, I haven't changed anything in my /etc/network/interfaces file:
Code:
auto lo
iface lo inet loopback
iface enp1s0f0 inet manual
        mtu 1500
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.69/24
        gateway 192.168.1.1
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0
        mtu 1500
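For what it's worth, an ifupdown-style config like the one above can be re-applied on a Proxmox node without rebooting. A minimal sketch, assuming the ifupdown2 package (which Proxmox ships as an option) is installed; run on the affected host:

```shell
# Show which interfaces ifupdown currently considers configured (ifupdown2)
ifquery --state

# Re-apply /etc/network/interfaces without a reboot (ifupdown2 only)
ifreload -a

# Or bring the bridge up explicitly (works with classic ifupdown too)
ifup vmbr0

# Confirm the IPv4 address is back on the bridge
ip -4 addr show vmbr0
```

If `ifreload` is missing, only classic ifupdown is installed and `ifup vmbr0` (or a reboot) is the fallback.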

Please help, as these three machines are used in production.
Thanks. Best,
Stephen
 
Proxmox does not use systemd-networkd at all! So I guess your problem is somewhere else ...
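As a sanity check: on a stock Proxmox node, networking is handled by classic Debian ifupdown via networking.service, so systemd-networkd being disabled is expected, not a fault. A quick way to verify on the host (a sketch):

```shell
# Proxmox VE configures the network via ifupdown (networking.service)
systemctl status networking

# "disabled" here is normal on a Proxmox node
systemctl is-enabled systemd-networkd
```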
 
Thank you Dietmar for your quick response.
Sorry for the misconception on my part about systemd.
How can I troubleshoot the network, as it still does not bring up the IPv4 side of the Ethernet connection on any of my nodes?

As a result, the LXC containers running on each node (Turnkey Core LXC containers) no longer have IPv4 connectivity.
Here is example ip addr output from one of those containers:
Code:
eth0@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 16:4b:5e:68:c3:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
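One way to narrow this down from the host is to compare the bridge membership with the container's network definition. A sketch, where 100 is a placeholder for the actual container ID:

```shell
# List the ports attached to each bridge; enp1s0f0 and the container veths should appear
bridge link show

# Show the container's network config as Proxmox stores it (100 is a hypothetical CTID)
pct config 100 | grep ^net

# Run ip addr inside the container without entering it
pct exec 100 -- ip -4 addr
```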
 
OK, thanks to your previous remark, I did some research and found the culprit [hint: it was NOT Proxmox].
Yesterday I switched the default version of Python from 2.7 to 3.7 on all of my machines.
As it turns out, making 3.7 the default disables IPv4 networking in Turnkey Core LXC containers.
As soon as I reverted to Python 2.7 as the default version in each container, IPv4 connectivity came back in each of them.
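For anyone hitting the same thing: if the default was switched by repointing the /usr/bin/python symlink (an assumption here; update-alternatives is the other common route), checking and reverting looks roughly like this inside each container:

```shell
# See where the python symlink currently points
readlink -f /usr/bin/python

# Revert the default to 2.7 (assumes the symlink method was used to switch)
ln -sf /usr/bin/python2.7 /usr/bin/python

# Verify the active default
python --version
```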
 
