[SOLVED] Routing issue after reboot

homerbrew

New Member
Oct 19, 2023
I recently rebooted my server (upgraded to Proxmox 8.0.4) and after the reboot the networking is not quite right. I can log into the host with no problems and my VM works without issue, but the host cannot seem to route anything out.
Code:
# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.7.154/24
    gateway 192.168.7.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge_ageing 0

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host noprefixroute
      valid_lft forever preferred_lft forever
 2: enp3s0: <NO-CARRIER,BROADCAST,MULTICAST,DYNAMIC,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
      link/ether a8:a1:59:4a:c7:2c brd ff:ff:ff:ff:ff:ff
 3: eno1: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
      link/ether a8:a1:59:4a:c7:2b brd ff:ff:ff:ff:ff:ff
      altname enp0s31f6
      inet 169.254.101.6/16 brd 169.254.255.255 scope global eno1
      valid_lft forever preferred_lft forever
      inet6 fe80::aaa1:59ff:fe4a:c72b/64 scope link
      valid_lft forever preferred_lft forever
 4: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
      link/ether 3c:58:c2:2e:28:54 brd ff:ff:ff:ff:ff:ff
 5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether a8:a1:59:4a:c7:2b brd ff:ff:ff:ff:ff:ff
      inet 192.168.7.154/24 scope global vmbr0
      valid_lft forever preferred_lft forever
      inet6 fe80::aaa1:59ff:fe4a:c72b/64 scope link
      valid_lft forever preferred_lft forever
 6: tap100i0: <BROADCAST,MULTICAST,PROMISC,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr100i0 state UNKNOWN group default qlen 1000
       link/ether a2:d7:ff:ff:60:3e brd ff:ff:ff:ff:ff:ff
       inet 169.254.125.96/16 brd 169.254.255.255 scope global tap100i0
      valid_lft forever preferred_lft forever
 7: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether 9e:ad:89:c7:a4:8f brd ff:ff:ff:ff:ff:ff
 8: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
      link/ether 96:58:8c:14:7b:9b brd ff:ff:ff:ff:ff:ff
      inet 169.254.39.180/16 brd 169.254.255.255 scope global fwpr100p0
      valid_lft forever preferred_lft forever
 9: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
     link/ether ea:ed:93:2c:47:c1 brd ff:ff:ff:ff:ff:ff
     inet 169.254.116.135/16 brd 169.254.255.255 scope global fwln100i0
     valid_lft forever preferred_lft forever
# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-15-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-6
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
proxmox-kernel-6.2: 6.2.16-15
proxmox-kernel-6.2.16-14-pve: 6.2.16-14
pve-kernel-5.15.116-1-pve: 5.15.116-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.26-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.9
libpve-guest-common-perl: 5.0.5
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.3-1
proxmox-backup-file-restore: 3.0.3-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.9
pve-cluster: 8.0.4
pve-container: 5.0.4
pve-docs: 8.0.5
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.8-2
pve-ha-manager: 4.0.2
pve-i18n: 3.0.7
pve-qemu-kvm: 8.0.2-6
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.13-pve1

# nslookup google.com
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.1.1.1#53: timed out
;; communications error to 8.8.8.8#53: timed out
;; no servers could be reached
# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 169.254.39.180 icmp_seq=1 Destination Host Unreachable
From 169.254.39.180 icmp_seq=2 Destination Host Unreachable
From 169.254.39.180 icmp_seq=3 Destination Host Unreachable
# ip route
0.0.0.0 dev fwpr100p0 scope link
0.0.0.0 dev tap100i0 scope link
0.0.0.0 dev fwln100i0 scope link
0.0.0.0 dev eno1 scope link
default dev fwpr100p0 scope link
default dev tap100i0 scope link
default dev eno1 scope link
default via 192.168.7.254 dev vmbr0 proto kernel onlink
169.254.0.0/16 dev eno1 proto kernel scope link src 169.254.101.6
169.254.0.0/16 dev fwln100i0 proto kernel scope link src 169.254.116.135
169.254.0.0/16 dev tap100i0 proto kernel scope link src 169.254.125.96
169.254.0.0/16 dev fwpr100p0 proto kernel scope link src 169.254.39.180
192.168.7.0/24 dev vmbr0 proto kernel scope link src 192.168.7.154

From a VM running under proxmox:
<SpamFilter:~>$nslookup google.com
Server:        1.1.1.1
Address:    1.1.1.1#53

Non-authoritative answer:
Name:    google.com
Address: 172.217.12.142

<SpamFilter:~>$ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=117 time=16.099 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=16.390 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=16.148 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 16.099/16.212/16.390/0.127 ms


What is preventing the host from reaching the internet? I just can't seem to figure it out. The firewalls are all off, so that is not the issue.
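A quick triage sketch for this situation (the commands and the gateway IP are taken from the output above; everything is read-only except the ping, and root may be needed for some of it):

```shell
# Which route/interface would the kernel actually pick for an external IP?
ip route get 8.8.8.8

# Look for duplicate or stray default routes.
ip route show default

# Confirm the configured nameservers.
cat /etc/resolv.conf

# Can the host reach its own gateway at all?
ping -c1 192.168.7.254
```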
 
Hi,

Is `192.168.7.254` the correct gateway IP? Your routing table contains several stray default routes:

# ip route
0.0.0.0 dev fwpr100p0 scope link
0.0.0.0 dev tap100i0 scope link
0.0.0.0 dev fwln100i0 scope link
0.0.0.0 dev eno1 scope link
I would manually clean up and re-add the default route using the `ip` tool:

Bash:
ip route del default
ip route add default via 192.168.7.254 dev vmbr0
 
That didn't work; it kept returning:
RTNETLINK answers: File exists
but when I searched on that, someone mentioned using `replace` instead of `add`, so I tried
ip route replace default via 192.168.7.254 dev vmbr0
and that worked! No idea why replacing my gateway with my gateway would fix it, unless the data was cached somewhere else with corrupt info.

Thanks for leading me to the answer.
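For reference, a sketch of why `add` failed while `replace` worked: with several stale default routes present (see the `ip route` output above), a single `ip route del default` removes only one of them, so `ip route add default` still collides with a leftover and returns `File exists`, whereas `replace` overwrites atomically. A hedged cleanup sketch (requires root; the gateway and bridge names are the ones from this thread):

```shell
# Delete default routes one at a time until none are left.
while ip route del default 2>/dev/null; do :; done

# Install the correct default route (replace is idempotent here).
ip route replace default via 192.168.7.254 dev vmbr0

# Verify: exactly one default route should remain.
ip route show default
```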
 
After fighting with this for a while, I finally found the reason my gateway was getting borked after every reboot: a service called connman. There are a bunch of posts about upgrades from earlier Debian releases to the latest causing the /etc/network/interfaces file to be ignored, which is what forced me to re-initialize my default gateway after every reboot. I re-installed network-manager (apt install network-manager), then removed connman (apt remove connman), and after a reboot my gateway remained in place and all my networking issues were gone. Hopefully this helps someone in the future.
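The steps from the paragraph above, as a shell sketch (this assumes connman is the package overriding /etc/network/interfaces on your system, as it was on mine; run as root):

```shell
apt update
apt install network-manager   # what I did; may not strictly be needed
apt remove --purge connman
reboot

# After the reboot, confirm the gateway survived:
ip route | grep '^default'
```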
 
Proxmox doesn't have network-manager or connman installed by default. It uses ifupdown2.
I believe connman got installed during the OS upgrade, following these steps: https://pve.proxmox.com/wiki/Upgrade_from_7_to_8 . Others in a similar situation had also upgraded using the same or similar steps, so it does seem to be due to the upgrade. Removing it fixed the networking issues I ran into and also let me run bind9, which had been failing because connman was binding to the same port.
 
There is really no dependency on, or installation of, the connman or network-manager packages, unless you have manually installed something on Proxmox yourself that depends on them.
 
Something during the upgrade installed it; I had never even heard of it before, so there is zero chance I installed it on purpose. network-manager was something I installed manually based on some other posts I had read. As you say, I'm sure I don't need it, since ifupdown2 handles the networking I need. Either way, others have been, and will be, affected by this when doing the upgrade, which is why I posted this.
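If anyone wants to check how connman ended up on their own system, a hedged sketch (apt keeps its transaction history under /var/log/apt/history.log; `aptitude why` shows a dependency chain if aptitude is installed):

```shell
# When was connman installed, and as part of which apt transaction?
zgrep -h -B3 'connman' /var/log/apt/history.log* 2>/dev/null | head -n 20

# What, if anything, on this system depends on it?
aptitude why connman 2>/dev/null || apt-cache rdepends --installed connman
```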
 
