After reboot, bridging stops working

foxt

Hi all, I recently installed Proxmox on my server. However, when I use a Linux bridge to allow VMs to connect to the internet, the host machine does not get any internet access. Inbound connections can be made (SSH, HTTP console, etc.), but outbound connections (ping, DNS, etc.) fail.
Code:
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
From 192.168.1.247 icmp_seq=7 Destination Host Unreachable

Removing the bridge and going back to a regular network connection works just fine, at the cost of not allowing VMs to connect to the internet.

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether MY:MAC:ADDR brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.247/8 brd 192.255.255.255 scope global noprefixroute enp0s31f6
       valid_lft forever preferred_lft forever
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether MY:MAC:ADDR brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.247/8 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet 192.168.1.80/24 brd 192.168.1.255 scope global noprefixroute vmbr0
       valid_lft forever preferred_lft forever
    inet6 fdad:39f:e769::dfe/128 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fdad:39f:e769:0:1504:2fd4:9efb:ba2/64 scope global mngtmpaddr noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::cc17:91b2:1780:a021/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::4eed:fbff:fe66:3ab8/64 scope link
       valid_lft forever preferred_lft forever
4: br-333b0d95f295: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:aa:ab:93:3b brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-333b0d95f295
       valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:82:4c:8d:c3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet 169.254.83.180/16 brd 169.254.255.255 scope global noprefixroute docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::cf0:779c:4a47:2533/64 scope link
       valid_lft forever preferred_lft forever
11: veth065ee09@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ea:74:b9:77:bd:27 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 169.254.227.241/16 brd 169.254.255.255 scope global noprefixroute veth065ee09
       valid_lft forever preferred_lft forever
    inet6 fe80::be39:9159:c11d:cd78/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::e874:b9ff:fe77:bd27/64 scope link
       valid_lft forever preferred_lft forever
13: veth748c3ed@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether fa:24:2f:a4:79:64 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet 169.254.11.227/16 brd 169.254.255.255 scope global noprefixroute veth748c3ed
       valid_lft forever preferred_lft forever
    inet6 fe80::79aa:c861:4238:c93c/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::f824:2fff:fea4:7964/64 scope link
       valid_lft forever preferred_lft forever
15: veth196bd56@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 22:82:17:b7:35:0b brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet 169.254.28.57/16 brd 169.254.255.255 scope global noprefixroute veth196bd56
       valid_lft forever preferred_lft forever
    inet6 fe80::8424:1f9c:f9e4:83ed/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::2082:17ff:feb7:350b/64 scope link
       valid_lft forever preferred_lft forever
17: veth733a167@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ba:39:bf:c3:47:83 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet 169.254.143.251/16 brd 169.254.255.255 scope global noprefixroute veth733a167
       valid_lft forever preferred_lft forever
    inet6 fe80::d997:1ce6:dbb6:7aca/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::b839:bfff:fec3:4783/64 scope link
       valid_lft forever preferred_lft forever
19: veth0fcdc1c@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 66:7a:65:10:bd:81 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet 169.254.51.58/16 brd 169.254.255.255 scope global noprefixroute veth0fcdc1c
       valid_lft forever preferred_lft forever
    inet6 fe80::647a:65ff:fe10:bd81/64 scope link
       valid_lft forever preferred_lft forever
 
Can you show the contents of the /etc/network/interfaces file with the working and the not-working settings? If you configured it via the GUI, there should be an interfaces.new file with the not-yet-applied changes. This will be the easiest way to check where the problem in the configuration is.
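Something like this should show both (these are the standard PVE paths):
Code:
# currently applied configuration
cat /etc/network/interfaces
# pending changes made via the GUI, if present
cat /etc/network/interfaces.new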
 
Hi Aaron, thanks for your reply. Here are the two files:

Code:
me@Cana ~> cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet static
    address 192.168.1.247/8
    gateway 192.168.1.1
Code:
me@Cana ~> cat /etc/network/interfaces.new
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.247/8
    gateway 192.168.1.1
    bridge-ports enp0s31f6
    bridge-stp off
    bridge-fd 0
 
Okay, that is actually not a bond but a bridge on top of the interface. Bonding is the combination of two NICs into one interface, with some failover if one of the physical connections fails.

May I ask why you use a /8 network? The /8 means that only the first octet defines the network, which does not conform to the IP ranges designated for private use. For example, 192.168.1.247/8 would treat everything from 192.0.0.0 to 192.255.255.255 as part of the local network. 192.168.0.0/16 is the largest network you can get in that private range, but unless you really need something else, you are most likely using a /24 network, where only the last octet identifies the host.

The network config does look okay, except for the /8. Try to change that to at least /16 (but I suspect that your network is actually using a /24) and try it again.
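For example, assuming your LAN really is a 192.168.1.0/24 network, the bridge section would look something like this:
Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.247/24
    gateway 192.168.1.1
    bridge-ports enp0s31f6
    bridge-stp off
    bridge-fd 0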
 
Okay, that is actually not a bond but a bridge on top of the interface. Bonding is the combination of two NICs into one interface, with some failover if one of the physical connections fails.
Ah, yes, my mistake. I got the two mixed up. I did actually mean to make a bridge, not a bond.

May I ask why you use a /8 network? The /8 means that only the first octet defines the network, which does not conform to the IP ranges designated for private use. 192.168.0.0/16 is the largest network you can get in that private range, but unless you really need something else, you are most likely using a /24 network, where only the last octet identifies the host.
That could have been the default in the Ethernet port configuration, so I just left it there. Can't remember, though; I'm not that well versed in CIDR.

The network config does look okay, except for the /8. Try to change that to at least /16 (but I suspect that your network is actually using a /24) and try it again.
Patching /etc/network/interfaces as follows still results in no network activity.

Diff:
--- /etc/network/interfaces    2021-01-11 12:16:56.459876621 +0000
+++ /etc/network/interfaces.new    2021-01-13 12:31:55.684128011 +0000
@@ -15,7 +15,13 @@
 iface lo inet loopback
 
 auto enp0s31f6
-iface enp0s31f6 inet static
-    address 192.168.1.247/8
+iface enp0s31f6 inet manual
+
+auto vmbr0
+iface vmbr0 inet static
+    address 192.168.1.247/24
     gateway 192.168.1.1
+    bridge-ports enp0s31f6
+    bridge-stp off
+    bridge-fd 0
 
Can you ping the gateway 192.168.1.1 from the PVE node? Can you ping the PVE node from some other machine in the network?

Can you show the output of the command ip a?
 
Can you ping the gateway 192.168.1.1 from the PVE node?
Yes. I can ping devices on the local network from the PVE node, including the gateway, and local devices on the network can access it (such as the HTTP interface). However, it cannot be reached from the internet and it cannot access the internet.

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 4c:ed:fb:66:3a:b8 brd ff:ff:ff:ff:ff:ff
4: br-333b0d95f295: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:aa:ab:93:3b brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-333b0d95f295
       valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:82:4c:8d:c3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet 169.254.83.180/16 brd 169.254.255.255 scope global noprefixroute docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::cf0:779c:4a47:2533/64 scope link
       valid_lft forever preferred_lft forever
11: veth065ee09@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ea:74:b9:77:bd:27 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 169.254.227.241/16 brd 169.254.255.255 scope global noprefixroute veth065ee09
       valid_lft forever preferred_lft forever
    inet6 fe80::be39:9159:c11d:cd78/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::e874:b9ff:fe77:bd27/64 scope link
       valid_lft forever preferred_lft forever
13: veth748c3ed@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether fa:24:2f:a4:79:64 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet 169.254.11.227/16 brd 169.254.255.255 scope global noprefixroute veth748c3ed
       valid_lft forever preferred_lft forever
    inet6 fe80::79aa:c861:4238:c93c/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::f824:2fff:fea4:7964/64 scope link
       valid_lft forever preferred_lft forever
15: veth196bd56@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 22:82:17:b7:35:0b brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet 169.254.28.57/16 brd 169.254.255.255 scope global noprefixroute veth196bd56
       valid_lft forever preferred_lft forever
    inet6 fe80::8424:1f9c:f9e4:83ed/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::2082:17ff:feb7:350b/64 scope link
       valid_lft forever preferred_lft forever
17: veth733a167@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ba:39:bf:c3:47:83 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet 169.254.143.251/16 brd 169.254.255.255 scope global noprefixroute veth733a167
       valid_lft forever preferred_lft forever
    inet6 fe80::d997:1ce6:dbb6:7aca/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::b839:bfff:fec3:4783/64 scope link
       valid_lft forever preferred_lft forever
19: veth0fcdc1c@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 66:7a:65:10:bd:81 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet 169.254.51.58/16 brd 169.254.255.255 scope global noprefixroute veth0fcdc1c
       valid_lft forever preferred_lft forever
    inet6 fe80::647a:65ff:fe10:bd81/64 scope link
       valid_lft forever preferred_lft forever
12055: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4c:ed:fb:66:3a:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.247/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fdad:39f:e769::dfe/128 scope global tentative noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fdad:39f:e769:0:1504:2fd4:9efb:ba2/64 scope global mngtmpaddr noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::cc17:91b2:1780:a021/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::4eed:fbff:fe66:3ab8/64 scope link
       valid_lft forever preferred_lft forever
 
Okay, so the network itself does work, but there is some other problem.

What does ip r show?

And why is there an interface named docker0 in the ip a output? Installing Docker directly on the PVE node is not supported and could be the reason why the networking is behaving strangely.
 
What does ip r show?
Sorry for the late response, but here it is:
Code:
default via 192.168.1.1 dev enp0s31f6 src 192.168.1.247 metric 202
default via 192.168.1.1 dev vmbr0 proto dhcp src 192.168.1.80 metric 35428
169.254.0.0/16 dev docker0 scope link src 169.254.83.180 metric 205
169.254.0.0/16 dev veth065ee09 scope link src 169.254.227.241 metric 211
169.254.0.0/16 dev veth748c3ed scope link src 169.254.11.227 metric 213
169.254.0.0/16 dev veth196bd56 scope link src 169.254.28.57 metric 215
169.254.0.0/16 dev veth733a167 scope link src 169.254.143.251 metric 217
169.254.0.0/16 dev veth0fcdc1c scope link src 169.254.51.58 metric 219
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev br-333b0d95f295 proto kernel scope link src 172.18.0.1 linkdown
192.0.0.0/8 dev vmbr0 proto kernel scope link src 192.168.1.247
192.0.0.0/8 dev enp0s31f6 proto dhcp scope link src 192.168.1.247 metric 202
192.168.1.0/24 dev vmbr0 proto dhcp scope link src 192.168.1.80 metric 35428

And why is there an interface named docker0 in the ip a output? Installing Docker directly on the PVE node is not supported and could be the reason why the networking is behaving strangely.
That could be the issue, and while I'd rather keep the services in Docker, I can work around that using LXC.
 
Have you tried rebooting the server? The routing table looks quite messy and convoluted. The following lines shouldn't be there at all, since that network isn't configured anymore.
Code:
192.0.0.0/8 dev vmbr0 proto kernel scope link src 192.168.1.247
192.0.0.0/8 dev enp0s31f6 proto dhcp scope link src 192.168.1.247 metric 202

These lines are in conflict:
Code:
default via 192.168.1.1 dev enp0s31f6 src 192.168.1.247 metric 202
default via 192.168.1.1 dev vmbr0 proto dhcp src 192.168.1.80 metric 35428

There should be only one default route, and it should go via vmbr0.
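If you don't want to reboot right away, the stale entries could also be cleaned up by hand, roughly like this (assuming the routes are still exactly as shown above):
Code:
# drop the leftover /8 routes from the old configuration
ip route del 192.0.0.0/8 dev vmbr0
ip route del 192.0.0.0/8 dev enp0s31f6
# drop the extra default route on the physical NIC
ip route del default via 192.168.1.1 dev enp0s31f6
# verify that only one default route via vmbr0 remains
ip r
A reboot after fixing the configuration is the cleaner option, though.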

That could be the issue, and while I'd rather keep the services in Docker, I can work around that using LXC.
If you want to use Docker, you can run it inside a VM. I think some people have also managed to run it inside an LXC container. But running Docker bare-metal next to PVE will be problematic, as you then have two toolsets that both want to configure things on the machine directly.

I suspect that your network issues stem from this.
 
If you want to use Docker, you can run it inside a VM. I think some people have also managed to run it inside an LXC container. But running Docker bare-metal next to PVE will be problematic, as you then have two toolsets that both want to configure things on the machine directly.

I suspect that your network issues stem from this.
I removed the Docker network, leaving ip a as follows:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 4c:ed:fb:66:3a:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.247/8 brd 192.255.255.255 scope global noprefixroute enp0s31f6
       valid_lft forever preferred_lft forever
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4c:ed:fb:66:3a:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.247/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet 192.168.1.80/24 brd 192.168.1.255 scope global secondary noprefixroute vmbr0
       valid_lft forever preferred_lft forever
    inet6 fdad:39f:e769::dfe/128 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fdad:39f:e769:0:1504:2fd4:9efb:ba2/64 scope global mngtmpaddr noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::cc17:91b2:1780:a021/64 scope link
       valid_lft forever preferred_lft forever

and my ip r as follows:
Code:
default via 192.168.1.1 dev enp0s31f6 src 192.168.1.247 metric 202
default via 192.168.1.1 dev vmbr0 proto dhcp src 192.168.1.80 metric 203
192.0.0.0/8 dev enp0s31f6 proto dhcp scope link src 192.168.1.247 metric 202
192.168.1.0/24 dev vmbr0 proto dhcp scope link src 192.168.1.80 metric 203
 
Is there some other service running that might affect the network config, like NetworkManager?
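For example, something along these lines (plain Debian commands, nothing PVE-specific) should show whether NetworkManager or a stray DHCP client is still managing the interfaces:
Code:
# is NetworkManager installed/running?
systemctl status NetworkManager
dpkg -l | grep -i network-manager
# is a DHCP client holding on to the old addresses?
ps aux | grep -E 'dhclient|dhcpcd' | grep -v grep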
 
Check the installed packages (dpkg -l).

Can you show the /etc/network/interfaces file and any files present in /etc/network/interfaces.d/ ? Maybe there is something we missed.
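For example (just the obvious commands):
Code:
cat /etc/network/interfaces
ls -la /etc/network/interfaces.d/
cat /etc/network/interfaces.d/* 2>/dev/null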
 
Check the installed packages (dpkg -l).
I didn't find anything there, but just in case I've missed anything, I've put the entire listing here: https://pastebin.com/raw/7pD5M7wU
Can you show the /etc/network/interfaces file and any files present in /etc/network/interfaces.d/? Maybe there is something we missed.
Code:
me@Cana ~> sudo cat /etc/networks^C
me@Cana ~> sudo cat /etc/network/interfaces
[sudo] password for me:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet static
    address 192.168.1.247/24
    gateway 192.168.1.1

me@Cana ~> ls -R /etc/network
/etc/network:
if-down.d/       if-pre-up.d/  ifupdown2/  interfaces.d/
if-post-down.d/  if-up.d/      interfaces  run@

/etc/network/if-down.d:
postfix*

/etc/network/if-post-down.d:
bridge@  ifenslave*  vlan*

/etc/network/if-pre-up.d:
bridge@  ifenslave*  vlan*

/etc/network/if-up.d:
bridgevlan*  bridgevlanport*  ifenslave*  mtu*  postfix*

/etc/network/ifupdown2:
addons.conf  ifupdown2.conf  policy.d/

/etc/network/ifupdown2/policy.d:

/etc/network/interfaces.d:
 
