Issues with networking and not adding routes

damo2929

Member
Mar 15, 2022
Hi all,
I'm having an issue with the network-scripts attempting to add the default gateway to an interface before it's ready:
Oct 17 09:05:17 wlsc-pxmh01 networking[2802]: error: Management: cmd '/bin/ip route add default via 10.199.11.254 proto kernel dev Management onlink' failed: returned 2 (Error: Nexthop device is not up.
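
If this is just an ordering race, the same command should succeed by hand once the VLAN device is up, roughly like this (a manual sketch of what the networking service is attempting, not something I've scripted):

# bring the VLAN interface up, then retry the exact route from the log
ip link set Management up
ip route add default via 10.199.11.254 proto kernel dev Management onlink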

The links are up:

root@wlsc-pxmh01:~# ip r
10.199.11.0/24 dev Management proto kernel scope link src 10.199.11.1
10.199.12.0/24 dev Migration proto kernel scope link src 10.199.12.1
10.199.13.0/24 dev CephFrontend proto kernel scope link src 10.199.13.1
10.199.14.0/24 dev CephBackend proto kernel scope link src 10.199.14.1
10.199.15.0/24 dev ClusMan proto kernel scope link src 10.199.15.1

root@wlsc-pxmh01:~# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 70:b5:e8:d0:c4:68 brd ff:ff:ff:ff:ff:ff
altname enp225s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 70:b5:e8:d0:c4:69 brd ff:ff:ff:ff:ff:ff
altname enp225s0f1
4: enp161s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
5: enp161s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff permaddr 40:a6:b7:96:ba:39
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
7: Management@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
8: CephBackend@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
9: CephFrontend@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
10: Migration@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
11: ClusMan@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
12: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
13: vmbr0.420@vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master infra state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
14: infra: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
alias stuff that powers the cloud
15: vmbr0.421@vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master scicom state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
16: scicom: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 40:a6:b7:96:ba:38 brd ff:ff:ff:ff:ff:ff
alias Science computing machines


# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp161s0f0
iface enp161s0f0 inet manual
mtu 9000

auto enp161s0f1
iface enp161s0f1 inet manual
mtu 9000

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
bond-slaves enp161s0f0 enp161s0f1
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
mtu 9000
bond-lacp-rate 1
#uplink bond

auto vmbr0
iface vmbr0 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
mtu 9000
#Default bond

auto Management
iface Management inet static
address 10.199.11.1/24
gateway 10.199.11.254
mtu 1500
vlan-id 411
vlan-raw-device bond0
#Cluster Management Interface

auto CephBackend
iface CephBackend inet static
address 10.199.14.1/24
mtu 9000
vlan-id 414
vlan-raw-device bond0
#Ceph Storage Devices

auto CephFrontend
iface CephFrontend inet static
address 10.199.13.1/24
mtu 9000
vlan-id 413
vlan-raw-device bond0
#Ceph Storage access network

auto Migration
iface Migration inet static
address 10.199.12.1/24
mtu 9000
vlan-id 412
vlan-raw-device bond0
#Virtual Machine Migration Network

auto ClusMan
iface ClusMan inet static
address 10.199.15.1/24
vlan-id 415
vlan-raw-device bond0
#Cluster internal management interfaces (corosync)

source /etc/network/interfaces.d/*


Any ideas as to the cause?
 
The outputs appear to be from a Proxmox host.
There is no network-scripts in Debian/Proxmox. That was a necessary part of RHEL from version 4 up to version 8; since RHEL 8 it is deprecated and has to be installed separately.

Yet, you're right: the interfaces output implies Debian/Proxmox.
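
If you want to confirm what is actually managing the interfaces, on a stock PVE install it should be ifupdown2 behind the networking unit; roughly (assuming defaults):

# check the ifupdown2 package and the service that produced the log line above
dpkg -l | grep ifupdown
systemctl status networking
journalctl -b -u networking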
 
I have found the issue to be related to the bridge disrupting the process:
when the bridge is present, it seems to hang all the VLANs on the bond.
This also appears to be a bug in the upstream Debian package,
as the Python script seems to be trying to work out the ordering but fails to get it right.
It's also present in other Debian-based distros, so I will log a bug upstream.

In the meantime I have all the interfaces going via the bridge, even though it will add latency to my 100Gbit links.
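
For reference, "via the bridge" just means pointing the VLAN stanzas at vmbr0 instead of bond0. For the management interface it looks roughly like this (a trimmed sketch of my change; on a VLAN-aware vmbr0 a vmbr0.411-style stanza would work just as well):

auto Management
iface Management inet static
address 10.199.11.1/24
gateway 10.199.11.254
mtu 1500
vlan-id 411
vlan-raw-device vmbr0
#Cluster Management Interface, now a VLAN of the bridge rather than the bond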
 
The same problem here.
@spirit, when I was changing my network setup according to your recommendation for SDN, almost exactly like here, by moving the VLANs from vmbr0 to bond0, I had the same issue with the gateway.
 
Hi,
I ran into the same problem today on a new installation, with a bridge and VLAN interfaces (for management and storage) on top of an LACP bond.
Moving the VLAN interfaces on top of the (not VLAN-aware) bridge solved the issue for now, but I'm worried it will break with some SDN configurations or other changes to the bridge.
Has anyone found a way around this issue while leaving the VLAN interfaces on the underlying interface?
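
One idea I have not tried yet (assuming post-up hooks only run once the device is actually up, and reusing the names and addresses from the first post as an example): drop the gateway line and add the default route in a post-up hook, keeping the VLAN on the bond, something like:

auto Management
iface Management inet static
address 10.199.11.1/24
mtu 1500
vlan-id 411
vlan-raw-device bond0
# add the default route only after the interface is up; ignore it if already present
post-up ip route add default via 10.199.11.254 dev Management || true
#Cluster Management Interface

Completely untested on my side, though.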
 
