PVE 8.2.2 default route goes missing after reboot.

kellogs

Member
May 14, 2024
I can replicate this issue at will.


If I remove the Linux bridge vmbr0 (bridging bond0) and reboot, the default gateway on bond0.200 survives.

If I add vmbr0 back and reboot the unit, the default gateway disappears.

root@compute-200-23:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.4-2-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.0-1
proxmox-backup-file-restore: 3.2.0-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.1
pve-cluster: 8.0.6
pve-container: 5.0.10
pve-docs: 8.2.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.5
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2

root@compute-200-23:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp7s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:25:90:2f:68:fc brd ff:ff:ff:ff:ff:ff
3: enp7s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:25:90:2f:68:fd brd ff:ff:ff:ff:ff:ff
4: enp2s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether ac:1f:6b:2d:6b:e6 brd ff:ff:ff:ff:ff:ff
5: enp7s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:25:90:2f:68:fe brd ff:ff:ff:ff:ff:ff
6: enp7s0f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:25:90:2f:68:ff brd ff:ff:ff:ff:ff:ff
7: enp2s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether ac:1f:6b:2d:6b:e6 brd ff:ff:ff:ff:ff:ff permaddr ac:1f:6b:2d:6b:e7
8: enp130s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether ac:1f:6b:2d:62:2a brd ff:ff:ff:ff:ff:ff
9: enp130s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether ac:1f:6b:2d:62:2a brd ff:ff:ff:ff:ff:ff permaddr ac:1f:6b:2d:62:2b
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether ac:1f:6b:2d:6b:e6 brd ff:ff:ff:ff:ff:ff
11: bond0.200@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:2d:6b:e6 brd ff:ff:ff:ff:ff:ff
inet 172.16.200.23/24 scope global bond0.200
valid_lft forever preferred_lft forever
inet6 fe80::ae1f:6bff:fe2d:6be6/64 scope link
valid_lft forever preferred_lft forever
12: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:2d:62:2a brd ff:ff:ff:ff:ff:ff
inet6 fe80::ae1f:6bff:fe2d:622a/64 scope link
valid_lft forever preferred_lft forever
13: bond1.202@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:2d:62:2a brd ff:ff:ff:ff:ff:ff
inet 172.16.202.23/24 scope global bond1.202
valid_lft forever preferred_lft forever
inet6 fe80::ae1f:6bff:fe2d:622a/64 scope link
valid_lft forever preferred_lft forever
14: bond1.203@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:2d:62:2a brd ff:ff:ff:ff:ff:ff
inet 172.16.203.23/24 scope global bond1.203
valid_lft forever preferred_lft forever
inet6 fe80::ae1f:6bff:fe2d:622a/64 scope link
valid_lft forever preferred_lft forever
15: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:2d:6b:e6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ae1f:6bff:fe2d:6be6/64 scope link
valid_lft forever preferred_lft forever

root@compute-200-23:~# ip route
172.16.200.0/24 dev bond0.200 proto kernel scope link src 172.16.200.23
172.16.202.0/24 dev bond1.202 proto kernel scope link src 172.16.202.23
172.16.203.0/24 dev bond1.203 proto kernel scope link src 172.16.203.23
root@compute-200-23:~#
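
For what it's worth, a quick check after each reboot to confirm the symptom (not from the original post, just standard iproute2/systemd commands):

Code:
ip route show default        # prints nothing when the gateway is gone
journalctl -b -u networking  # boot-time ifupdown2 run of networking.service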

root@compute-200-23:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp130s0f0 inet manual

iface enp7s0f0 inet manual

iface enp7s0f1 inet manual

iface enp7s0f2 inet manual

iface enp2s0f0 inet manual

iface enp7s0f3 inet manual

iface enp2s0f1 inet manual

iface enp130s0f1 inet manual

auto bond0
iface bond0 inet manual
bond-slaves none
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
bond-ports enp2s0f0 enp2s0f1

auto bond1
iface bond1 inet manual
bond-slaves none
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
bond-ports enp130s0f0 enp130s0f1

auto bond0.200
iface bond0.200 inet static
address 172.16.200.23/24
gateway 172.16.200.1

auto bond1.202
iface bond1.202 inet static
address 172.16.202.23/24

auto bond1.203
iface bond1.203 inet static
address 172.16.203.23/24

auto vmbr0
iface vmbr0 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

source /etc/network/interfaces.d/*
 
Could you post the ifupdown2 debug logs from /var/log/ifupdown2 corresponding to the boot-time application of the network config? Ideally one from a boot where it doesn't work, and one where it does.
 
Hello Fabian,

Could you kindly share the command to gather the debug logs, and I will post them.
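
(For reference, ifupdown2 already writes per-run debug logs under /var/log/ifupdown2; a minimal, hedged way to grab the boot-time one before it rotates away - the directory name is a placeholder:)

Code:
# list the per-run directories, oldest first; the one created right after
# boot holds the boot-time application of the config
ls -ltr /var/log/ifupdown2/
# copy that run's debug log somewhere safe (path below is a placeholder)
cp /var/log/ifupdown2/<boot-run-directory>/ifupdown2.debug.log /root/ifupdown2.debug.boot.log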
 

Default route missing:
root@compute-200-23:/var/log/ifupdown2# ip route
172.16.200.0/24 dev bond0.200 proto kernel scope link src 172.16.200.23
172.16.202.0/24 dev bond1.202 proto kernel scope link src 172.16.202.23
172.16.203.0/24 dev bond1.203 proto kernel scope link src 172.16.203.23

Removed bond0 from the GUI.


Rebooted the unit and the route appears:

root@compute-200-23:~# ip route
default via 172.16.200.1 dev bond0.200 proto kernel onlink
172.16.200.0/24 dev bond0.200 proto kernel scope link src 172.16.200.23
172.16.202.0/24 dev bond1.202 proto kernel scope link src 172.16.202.23
172.16.203.0/24 dev bond1.203 proto kernel scope link src 172.16.203.23

It seems the debug logs in /var/log/ifupdown2 are rotated faster than I could work out which directory holds the relevant file.
 
I have managed to copy the oldest folder from the reboot after adding vmbr0, when the default route went missing.
 

Attachments

  • ifupdown2.debug.log (31.7 KB)
@kellogs your config seems wrong - your bonds don't have their "slave" devices configured?
 
@jkotecki I think your config is also wrong - you want vlan.XXX or specify the vlan-id. If that is not the cause of your issues, please post:

- ip a
- ip l
- ifreload -av

thanks!
 
@kellogs your config seems wrong - your bonds don't have their "slave" devices configured?

Hello Fabian,

I was unaware that we needed to use bond-slaves instead of bond-ports, but after your comment I read up on this, and it seems bond-ports is meant for OVS while bond-slaves is for Linux bonds.

Anyway, I have changed bond-ports to bond-slaves as follows, but unfortunately we still lose the default gateway once the unit is rebooted.

# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp130s0f0
iface enp130s0f0 inet manual

iface enp7s0f0 inet manual

iface enp7s0f1 inet manual

iface enp7s0f2 inet manual

auto enp2s0f0
iface enp2s0f0 inet manual

iface enp7s0f3 inet manual

auto enp2s0f1
iface enp2s0f1 inet manual

auto enp130s0f1
iface enp130s0f1 inet manual

auto bond0
iface bond0 inet manual
bond-slaves enp2s0f0 enp2s0f1
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3

auto bond1
iface bond1 inet manual
bond-slaves enp130s0f0 enp130s0f1
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3

auto bond0.200
iface bond0.200 inet static
address 172.16.200.23/24
gateway 172.16.200.1

auto bond1.202
iface bond1.202 inet static
address 172.16.202.23/24

auto bond1.203
iface bond1.203 inet static
address 172.16.203.23/24

auto vmbr0
iface vmbr0 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
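
As a side note, a minimal way to double-check after boot that the bond really picked up its slaves (plain Linux bonding/iproute2, nothing Proxmox-specific):

Code:
# negotiated mode, LACP state and slave list of the bond
cat /proc/net/bonding/bond0
# detailed link view of the bond plus a quick look at the default route
ip -d link show bond0
ip route show default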
 
see my first reply in this thread ;)
 
If you use a VLAN-aware bridge, you should use vmbr0.X instead of bond0.X for your VLAN IP addresses.

Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr0.200
iface vmbr0.200 inet static
    address 172.16.200.23/24
    gateway 172.16.200.1

auto vmbr0.202
iface vmbr0.202 inet static
    address 172.16.202.23/24

auto vmbr0.203
iface vmbr0.203 inet static
    address 172.16.203.23/24


Note that the order of interfaces in /etc/network/interfaces can be important (bond before bridge, tagged bond after bridge, ...), so it's better to use the GUI to keep them in the correct order.
 
If you use a VLAN-aware bridge, you should use vmbr0.X instead of bond0.X for your VLAN IP addresses. [...] Note that the order of interfaces in /etc/network/interfaces can be important, so it's better to use the GUI to keep them in the correct order.
Actually no, because those bond0.X and bond1.X interfaces are for Proxmox services only and are not used by the VMs. I did use the GUI as well.
 
Issuing ifreload -a after rebooting brings the interface up, which is the same behaviour described in Bugzilla bug ID 5406. Attached is a copy of the ifupdown2 debug file from after issuing ifreload -a.
 

Attachments

  • ifupdown2.debug.log (33.5 KB)
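
(Until the ordering problem is solved, one possible stop-gap - only a sketch, not something suggested in this thread - would be to re-assert the default route from the gateway stanza itself:)

Code:
auto bond0.200
iface bond0.200 inet static
    address 172.16.200.23/24
    gateway 172.16.200.1
    # re-add the default route in case the boot-time run dropped it
    post-up ip route replace default via 172.16.200.1 dev bond0.200 || true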
Based on the comment in Bugzilla bug ID 5406, I have rearranged the config so that vmbrX comes before bondX.X, and the default route now survives rebooting.

Example

auto lo
iface lo inet loopback

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

iface enp130s0f0 inet manual

iface enp130s0f1 inet manual

auto bond0
iface bond0 inet manual
bond-slaves enp2s0f0 enp2s0f1
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3

auto bond1
iface bond1 inet manual
bond-slaves enp130s0f0 enp130s0f1
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

auto bond0.200
iface bond0.200 inet static
address 172.16.200.23/24
gateway 172.16.200.1

auto bond1.202
iface bond1.202 inet static
address 172.16.202.23/24

auto bond1.203
iface bond1.203 inet static
address 172.16.203.23/24


source /etc/network/interfaces.d/*
 
Be careful: if you use a VLAN-aware vmbr0 and tag a VLAN on bond0.X directly, you can't use the same VLAN for VMs, because the traffic will never reach vmbr0 (it will be forced to go to bond0.X).
I set up vmbr0 and use SDN with VLAN tags for all VM traffic. bond0.X and bond1.X are for host and Ceph traffic only.
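
(A quick, hedged way to verify which VLANs actually sit on the bridge versus directly on the bonds - standard iproute2 commands:)

Code:
# VLANs carried by the VLAN-aware bridge and its ports
bridge vlan show
# VLAN sub-interfaces configured directly on the bonds
ip -d link show type vlan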
 
For use with VMs:
Under the interfaces config file, we define vmbr0 with the tagged bond0.<tag> and *no IP nor gateway*.

For use with PVE management (i.e. port 8006), cluster and Ceph (all on the same 10G LACP'ed bond0):
Under the interfaces config file, we define an IP and gateway on it.

To give an example:

[screenshots of the interfaces configuration omitted]
And finally, you may also add other local subnets (e.g. management desktop IPs that are not under 10.x.x.x) with a post-up ip route rule.

[screenshot of the post-up route configuration omitted]
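
Since the screenshot is not reproduced here, a minimal sketch of what such a stanza could look like (all addresses below are illustrative placeholders, not taken from the original post):

Code:
auto bond0.200
iface bond0.200 inet static
    address 10.0.200.23/24
    gateway 10.0.200.1
    # make a non-10.x management desktop subnet reachable via the same gateway
    post-up ip route add 192.168.100.0/24 via 10.0.200.1 dev bond0.200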

In the GUI, under the VM's hardware network configuration, use vmbr0 with VLAN tag 132, 200, etc. to enable connectivity.
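
(The CLI equivalent would be something along these lines; VMID 100 and the virtio model are placeholders:)

Code:
# attach the VM's first NIC to vmbr0 with VLAN tag 200
qm set 100 --net0 virtio,bridge=vmbr0,tag=200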

Cheers,
 
We're experiencing the same problem on the latest 8.2.2.

Exception: cmd '/bin/ip route replace default via 172.30.41.1 proto kernel dev vlan1304 onlink' failed: returned 2 (Error: Nexthop device is not up.
I'll open a support case and attach the whole logs there (update: Ticket #4721214)


Basic networking:
We've set up the nodes with bond0 and a VLAN on top of it carrying the management IP and the default gateway.
(Additional bond1+VLAN+IP for Ceph and an RJ45 NIC+VLAN+IP for Corosync.)
So there was no vmbr0 in the network config, only bonds and VLANs. Everything worked fine and survived reboots.

Then adding SDN:
Tried to set up SDN, but it needs a vmbr, so I added "vmbrsdn" on top of bond0 (in addition to the management VLAN!).
Added a VLAN zone and 2 SDN VLANs; everything worked fine for a few days, including all kinds of benchmarking etc.


Reboot:
Only now, after rebooting a node, suddenly havoc - the default route/gateway is missing.
ifreload -a re-adds the route, but severe issues remain:
The cluster was FUBAR until all nodes were rebooted and "ifreload -a" executed. Yet all cluster/Corosync/Ceph IPs are in their own range/VLAN, so no routing should be needed, meaning a missing gateway should not be a problem - ping etc. always worked.


Snippets from the debug log - vlan1304 on bond0 fails to apply the route.

Code:
2024-05-30 11:01:22,950: MainThread: ifupdown.stateManager: statemanager.py:150:ifaceobj_sync(): debug: bond0: statemanager sync state pre-up
2024-05-30 11:01:22,950: MainThread: ifupdown: scheduler.py:161:run_iface_list_ops(): info: vlan1304: running ops ...
2024-05-30 11:01:22,950: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module xfrm
2024-05-30 11:01:22,950: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module link
2024-05-30 11:01:22,950: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module bond
2024-05-30 11:01:22,950: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module vlan
2024-05-30 11:01:22,950: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2993:link_add_vlan(): info: vlan1304: netlink: ip link add link bond0 name vlan1304 type vlan id 1304 protocol 802.1q bridge_binding off
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module vxlan
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module usercmds
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module bridge
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module bridgevlan
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module tunnel
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module vrf
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module ethtool
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module auto
2024-05-30 11:01:22,951: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: pre-up : running module address
2024-05-30 11:01:22,951: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /sbin/sysctl net.mpls.conf.vlan1304.input=0
2024-05-30 11:01:22,952: MainThread: ifupdown2.__Sysfs: io.py:42:write_to_file(): info: writing "1500" to file /sys/class/net/vlan1304/mtu
2024-05-30 11:01:22,953: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:3285:addr_add(): info: vlan1304: netlink: ip addr add 172.30.41.20/26 dev vlan1304
2024-05-30 11:01:22,953: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2606:link_up(): info: vlan1304: netlink: ip link set dev vlan1304 up
2024-05-30 11:01:22,953: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: up : running module dhcp
2024-05-30 11:01:22,953: MainThread: ifupdown: scheduler.py:105:run_iface_op(): debug: vlan1304: up : running module address
2024-05-30 11:01:22,953: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /bin/ip route replace default via 172.30.41.1 proto kernel dev vlan1304 onlink
2024-05-30 11:01:22,958: MainThread: ifupdown.address: modulebase.py:124:log_error(): debug:   File "/usr/sbin/ifup", line 135, in <module>
    sys.exit(main())
  File "/usr/sbin/ifup", line 123, in main
    return stand_alone()
  File "/usr/sbin/ifup", line 103, in stand_alone
    status = ifupdown2.main()
  File "/usr/share/ifupdown2/ifupdown/main.py", line 77, in main
    self.handlers.get(self.op)(self.args)
  File "/usr/share/ifupdown2/ifupdown/main.py", line 193, in run_up
    ifupdown_handle.up(['pre-up', 'up', 'post-up'],
  File "/usr/share/ifupdown2/ifupdown/ifupdownmain.py", line 1843, in up
    ret = self._sched_ifaces(filtered_ifacenames, ops,
  File "/usr/share/ifupdown2/ifupdown/ifupdownmain.py", line 1566, in _sched_ifaces
    ifaceScheduler.sched_ifaces(self, ifacenames, ops,
  File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 595, in sched_ifaces
    cls.run_iface_list(ifupdownobj, run_queue, ops,
  File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 325, in run_iface_list
    cls.run_iface_graph(ifupdownobj, ifacename, ops, parent,
  File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 315, in run_iface_graph
    cls.run_iface_list_ops(ifupdownobj, ifaceobjs, ops)
  File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 188, in run_iface_list_ops
    cls.run_iface_op(ifupdownobj, ifaceobj, op,
  File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 106, in run_iface_op
    m.run(ifaceobj, op,
  File "/usr/share/ifupdown2/addons/address.py", line 1606, in run
    op_handler(self, ifaceobj,
  File "/usr/share/ifupdown2/addons/address.py", line 1231, in _up
    self._add_delete_gateway(ifaceobj, gateways, prev_gw)
  File "/usr/share/ifupdown2/addons/address.py", line 759, in _add_delete_gateway
    self.log_error('%s: %s' % (ifaceobj.name, str(e)))
  File "/usr/share/ifupdown2/ifupdownaddons/modulebase.py", line 121, in log_error
    stack = traceback.format_stack()
2024-05-30 11:01:22,958: MainThread: ifupdown.address: modulebase.py:125:log_error(): debug: Traceback (most recent call last):
  File "/usr/share/ifupdown2/addons/address.py", line 757, in _add_delete_gateway
    self.iproute2.route_add_gateway(ifaceobj.name, add_gw, vrf, metric, onlink=self.l3_intf_default_gateway_set_onlink)
  File "/usr/share/ifupdown2/lib/iproute2.py", line 881, in route_add_gateway
    utils.exec_command(cmd)
  File "/usr/share/ifupdown2/ifupdown/utils.py", line 414, in exec_command
    return cls._execute_subprocess(shlex.split(cmd),
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ifupdown2/ifupdown/utils.py", line 392, in _execute_subprocess
    raise Exception(cls._format_error(cmd,
Exception: cmd '/bin/ip route replace default via 172.30.41.1 proto kernel dev vlan1304 onlink' failed: returned 2 (Error: Nexthop device is not up.
)
2024-05-30 11:01:22,958: MainThread: ifupdown: scheduler.py:114:run_iface_op(): error: vlan1304: cmd '/bin/ip route replace default via 172.30.41.1 proto kernel dev vlan1304 onlink' failed: returned 2 (Error: Nexthop device is not up.
)
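
For completeness, the failing step can be reproduced/recovered by hand, assuming the device and gateway names from the log above (ifreload -a achieves the same thing):

Code:
# bring the nexthop device up first, then the route replace succeeds
ip link set dev vlan1304 up
ip route replace default via 172.30.41.1 dev vlan1304 onlink
ip route show default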
 
