[SOLVED] systemctl status networking.service failed

Dec 7, 2018
Hi all,

I'm running kernel 4.15.18-9-pve, and this is the network configuration:


root@pve03:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

4: enp4s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
5: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
8: vlan439@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP mode DEFAULT group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
9: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff



root@pve03:~# ip address

4: enp4s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
5: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
inet 10.10.32.3/20 brd 10.10.47.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::f6e9:d4ff:fea5:ea50/64 scope link
valid_lft forever preferred_lft forever
8: vlan439@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
9: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether f4:e9:d4:a5:ea:50 brd ff:ff:ff:ff:ff:ff
inet 10.10.126.53/24 brd 10.10.126.255 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 fe80::f6e9:d4ff:fea5:ea50/64 scope link
valid_lft forever preferred_lft forever

root@pve03:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp4s0f0 inet manual

iface enp4s0f1 inet manual

auto bond0
iface bond0 inet manual
slaves enp4s0f0 enp4s0f1
bond_miimon 100
bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
address 10.10.32.3
netmask 255.255.240.0
gateway 10.10.47.254
bridge_ports bond0
bridge_stp off
bridge_fd 0

auto vlan439
iface vlan439 inet manual
vlan_raw_device bond0

auto vmbr1
iface vmbr1 inet static
address 10.10.126.53
netmask 255.255.255.0
bridge_ports vlan439
bridge_stp off
bridge_fd 0
network 10.10.126.0
post-up ip route add table vlan439 default dev vmbr1
post-up ip rule add from 10.10.126.0/24 table vlan439
post-down ip route del table vlan439 default dev vmbr1
post-down ip rule del from 10.10.126.0/24 table vlan439
root@pve03:~#
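For context, the post-up lines implement source-based policy routing: traffic sourced from 10.10.126.0/24 is looked up in a dedicated table whose default route points out of vmbr1. The resulting state can be inspected as below (a sketch; it assumes the table name vlan439 is mapped to a number in /etc/iproute2/rt_tables, otherwise ip would reject the name):

root@pve03:~# ip rule show                   # expect a rule like: from 10.10.126.0/24 lookup vlan439
root@pve03:~# ip route show table vlan439    # expect: default dev vmbr1 scope link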


Here I see something suspicious:

root@pve03:~# systemctl status networking.service
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2018-12-10 10:51:57 CET; 3h 31min ago
Docs: man:interfaces(5)
Process: 2397 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Process: 2384 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS)
Main PID: 2397 (code=exited, status=1/FAILURE)
CPU: 75ms

Dec 10 10:51:57 pve03 systemd[1]: Starting Raise network interfaces...
Dec 10 10:51:57 pve03 ifup[2397]: Waiting for vmbr1 to get ready (MAXWAIT is 2 seconds).
Dec 10 10:51:57 pve03 ifup[2397]: RTNETLINK answers: File exists
Dec 10 10:51:57 pve03 ifup[2397]: ifup: failed to bring up vmbr1
Dec 10 10:51:57 pve03 systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 10:51:57 pve03 systemd[1]: Failed to start Raise network interfaces.
Dec 10 10:51:57 pve03 systemd[1]: networking.service: Unit entered failed state.
Dec 10 10:51:57 pve03 systemd[1]: networking.service: Failed with result 'exit-code'.
root@pve03:~# ^C


But the network works just fine: I'm connected to the server via SSH, and it can ping google.com.

root@pve03:~# ping google.com
PING google.com (216.58.205.46) 56(84) bytes of data.
64 bytes from mil04s24-in-f14.1e100.net (216.58.205.46): icmp_seq=1 ttl=55 time=7.18 ms
64 bytes from mil04s24-in-f14.1e100.net (216.58.205.46): icmp_seq=2 ttl=55 time=6.98 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 6.988/7.085/7.182/0.097 ms
root@pve03:~#
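The unit's failure can be cross-checked against ifupdown's own interface state (a sketch; ifquery ships with ifupdown, and the per-interface state files under /run/network/ are an implementation detail of this ifupdown version):

root@pve03:~# ifquery --state                   # lists the interfaces ifupdown considers configured
root@pve03:~# cat /run/network/ifstate.vmbr1    # raw state file for the bridge that failed to come up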



What's the reason behind that failure?
 
post-up ip route add table vlan439 default dev vmbr1
post-up ip rule add from 10.10.126.0/24 table vlan439
post-down ip route del table vlan439 default dev vmbr1
post-down ip rule del from 10.10.126.0/24 table vlan439
* Why do you need the post-up/post-down lines here? (The CIDRs for vmbr0 and vmbr1 are non-overlapping.)
* I guess that the `address/netmask` lines in the vmbr1 stanza already add the route for 10.10.126.0/24, and the post-up/down lines provoke the `RTNETLINK answers: File exists` error (which is the reason for the failed networking.service).
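For anyone debugging something similar, replaying the post-up commands by hand usually shows which one collides (a sketch, run as root while the interfaces are up):

root@pve03:~# ip route add table vlan439 default dev vmbr1   # -> "RTNETLINK answers: File exists" if the route is already present
root@pve03:~# ip route show table vlan439                    # inspect what is already in the table
root@pve03:~# ip rule show                                   # duplicate rules also pile up across re-runs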
 
Hi Stoiko,

You're right.
We commented out the mentioned lines, and now the networking service is up and running.

# from /etc/network/interfaces
auto vmbr1
iface vmbr1 inet static
address 10.10.126.53
#netmask 255.255.255.0
bridge_ports vlan439
bridge_stp off
bridge_fd 0
network 10.10.126.0
#post-up ip route add table vlan439 default dev vmbr1
#post-up ip rule add from 10.10.126.0/24 table vlan439
#post-down ip route del table vlan439 default dev vmbr1
#post-down ip rule del from 10.10.126.0/24 table vlan439

root@pve03:~# systemctl status networking.service
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: active (exited) since Mon 2018-12-10 16:34:40 CET; 41min ago
Docs: man:interfaces(5)
Process: 1026 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS)
Process: 994 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS)
Main PID: 1026 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4915)
Memory: 0B
CPU: 0
CGroup: /system.slice/networking.service

Dec 10 16:34:34 pve03 systemd[1]: Starting Raise network interfaces...
Dec 10 16:34:40 pve03 ifup[1026]: ifup: waiting for lock on /run/network/ifstate.vmbr0
Dec 10 16:34:40 pve03 ifup[1026]: ifup: waiting for lock on /run/network/ifstate.vmbr1
Dec 10 16:34:40 pve03 systemd[1]: Started Raise network interfaces.
root@pve03:~#
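If the policy-routing entries are ever needed again, an alternative to removing them is to make them idempotent so that a re-run of ifup can't fail the unit. A sketch (untested here; `ip route replace` succeeds whether or not the route already exists, and `|| true` keeps a duplicate-rule or missing-rule error from propagating):

# idempotent variants of the original post-up/post-down lines for the vmbr1 stanza
post-up ip route replace table vlan439 default dev vmbr1
post-up ip rule add from 10.10.126.0/24 table vlan439 || true
post-down ip route del table vlan439 default dev vmbr1 || true
post-down ip rule del from 10.10.126.0/24 table vlan439 || true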
 
Glad it worked! If you like, it'd be nice if you set the thread to SOLVED - that way other users know what to expect when seeing it!
Thanks!
 
