[SOLVED] Configured bridge isn't coming up at Proxmox VE start.

Julien_Fremont

Oct 14, 2022
Hello everyone.

I have a problem with a cluster that I'm currently building from scratch. It's a small 2-node cluster installed with 7.2-1 and upgraded to 7.2-11 via the non-production Proxmox repositories. The two nodes are identical and have four networks configured:

  • eno3/vmbr0: administration network (WebUI, SSH, etc.)
  • eno2: inter-node communication
  • enp4s0f0: external Ceph cluster access
  • eno1/vmbr1: VLAN network access for the VMs

The first three work fine, but eno1/vmbr1 doesn't work properly. Whether after a reboot, a cold start, or applying the nodes' network configuration, eno1 doesn't come up and vmbr1 doesn't show in the `ip a` output:

root@abeehouse-node-1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 18:66:da:93:e7:9d brd ff:ff:ff:ff:ff:ff
altname enp2s0f1
inet 10.40.5.20/24 scope global eno2
valid_lft forever preferred_lft forever
inet6 fe80::1a66:daff:fe93:e79d/64 scope link
valid_lft forever preferred_lft forever
4: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether 18:66:da:93:e7:9e brd ff:ff:ff:ff:ff:ff
altname enp3s0f0
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 18:66:da:93:e7:9f brd ff:ff:ff:ff:ff:ff
altname enp3s0f1
6: enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether a0:36:9f:e9:74:90 brd ff:ff:ff:ff:ff:ff
inet 10.40.7.20/24 scope global enp4s0f0
valid_lft forever preferred_lft forever
inet6 fe80::a236:9fff:fee9:7490/64 scope link
valid_lft forever preferred_lft forever
7: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a0:36:9f:e9:74:92 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 18:66:da:93:e7:9e brd ff:ff:ff:ff:ff:ff
inet 10.40.3.20/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::1a66:daff:fe93:e79e/64 scope link
valid_lft forever preferred_lft forever


For some reason, ethtool sees eno1 as disconnected:

root@abeehouse-node-1:~# ethtool eno1
Settings for eno1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
MDI-X: Unknown
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
drv probe link timer ifdown ifup rx_err tx_err
Link detected: no

However, I can bring the interface up manually without issue. vmbr1 is still missing, though:

root@abeehouse-node-1:~# ip link set eno1 up
root@abeehouse-node-1:~# ip link show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
root@abeehouse-node-1:~# ip link show vmbr1
Device "vmbr1" does not exist.

Then, I can start the bridge manually without problem:

root@abeehouse-node-1:~# ifup vmbr1
root@abeehouse-node-1:~# ip link show vmbr1
9: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff

If I start the interface and bridge manually, the VMs work without problem. Otherwise, the VMs fail to start as vmbr1 does not exist:

bridge 'vmbr1' does not exist
kvm: -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 512
TASK ERROR: start failed: QEMU exited with code 1

The problem is the same on both nodes (same config), and strangely, if I configure eno1 with just an IP and no bridge, it comes up fine at boot or after applying the configuration. So it doesn't seem to be a hardware issue.

From my searches, I came across a post from 2020 describing a similar problem, linked to VLAN awareness and ifupdown2:

https://forum.proxmox.com/threads/bridge-cant-be-found-and-vm-failed-to-start.63138/

However, in my case enabling or disabling VLAN awareness doesn't change anything, and as far as I know, Proxmox VE 7.2 ships with ifupdown2 out of the box. I have a similar (single-node) setup at home and the VLAN bridge works without problems, so if someone can help me figure out why eno1/vmbr1 doesn't start automatically, I'll be grateful.

Here is some additional information:

Network config:
root@abeehouse-node-1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno3
iface eno3 inet manual
#ADMIN ACCESS PORT

auto eno1
iface eno1 inet manual
#VLAN ACCESS PORT

auto eno2
iface eno2 inet static
address 10.40.5.20/24
#Intercom abeehouse. /!\ NE PAS UTILISER POUR LES VM /!\

iface eno4 inet manual
#NOT USED

auto enp4s0f0
iface enp4s0f0 inet static
address 10.40.7.20/24
mtu 9000
#Acces à honeycomb. /!\ NE PAS UTILISER POUR LES VM /!\

iface enp4s0f1 inet manual
#NOT USED

auto vmbr0
iface vmbr0 inet static
address 10.40.3.20/24
gateway 10.40.3.254
bridge-ports eno3
bridge-stp off
bridge-fd 0
#Acces réseau admin. /!\ NE PAS UTILISER POUR LES VM /!\

auto vmbr1
iface vmbr1 inet manual
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
#ACCES VLAN POUR VM

Kernel version:
root@abeehouse-node-1:~# uname -a
Linux abeehouse-node-1 5.15.60-2-pve #1 SMP PVE 5.15.60-2 (Tue, 04 Oct 2022 16:52:28 +0200) x86_64 GNU/Linux

Ethernet cards:
root@abeehouse-node-1:~# lspci | grep Ethernet
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
04:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
04:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
 
Hi,

Does "ifreload -a" say anything?

Can you please also post the output of dmesg | grep eno1?


2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff altname enp2s0f0
From the output of `ip a` I can see the old name is `enp2s0f0`. Have you edited the interface names for the enoX devices?
 
I tried from a fresh reboot of the first node; "ifreload -a" doesn't seem to do anything:

root@abeehouse-node-1:~# ifreload -a
root@abeehouse-node-1:~# ip link show eno1
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
root@abeehouse-node-1:~# ip link show vmbr1
Device "vmbr1" does not exist.
root@abeehouse-node-1:~#
It also returns nothing after manually restarting the interface and the bridge.

For the dmesg log (also from a fresh reboot of the node) :
root@abeehouse-node-1:~# dmesg | grep eno1
[ 3.591806] tg3 0000:02:00.0 eno1: renamed from eth0
Strangely, there's nothing more, unlike the other interfaces.

About the interface name: no, I didn't edit it. I don't know where the "enp2s0f0" name comes from.
 
I have done some additional tests, as I have physical access to the two servers. I can't do much with the second one, but no VM is running on the first server, so I can troubleshoot there. I removed the vmbr1 and eno1 config from the WebUI, applied it, and rebooted the node. Here is what /etc/network/interfaces looks like:

root@abeehouse-node-1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno3 inet manual
#ADMIN ACCESS PORT

iface eno1 inet manual
#VLAN ACCESS PORT

auto eno2
iface eno2 inet static
address 10.40.5.20/24
#Intercom abeehouse. /!\ NE PAS UTILISER POUR LES VM /!\

iface eno4 inet manual
#NOT USED

auto enp4s0f0
iface enp4s0f0 inet static
address 10.40.7.20/24
mtu 9000
#Acces à honeycomb. /!\ NE PAS UTILISER POUR LES VM /!\

iface enp4s0f1 inet manual
#NOT USED

auto vmbr0
iface vmbr0 inet static
address 10.40.3.20/24
gateway 10.40.3.254
bridge-ports eno3
bridge-stp off
bridge-fd 0
#Acces réseau admin. /!\ NE PAS UTILISER POUR LES VM /!\

After reboot, eno1 is down as expected, because there is no config associated with it:

root@abeehouse-node-1:~# ethtool eno1
Settings for eno1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
MDI-X: Unknown
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
drv probe link timer ifdown ifup rx_err tx_err
Link detected: no
root@abeehouse-node-1:~# ip link show eno1
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0f0

Then I configured vmbr1 again via the WebUI and applied it. Here is the resulting /etc/network/interfaces:

root@abeehouse-node-1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno3 inet manual
#ADMIN ACCESS PORT

iface eno1 inet manual
#VLAN ACCESS PORT

auto eno2
iface eno2 inet static
address 10.40.5.20/24
#Intercom abeehouse. /!\ NE PAS UTILISER POUR LES VM /!\

iface eno4 inet manual
#NOT USED

auto enp4s0f0
iface enp4s0f0 inet static
address 10.40.7.20/24
mtu 9000
#Acces à honeycomb. /!\ NE PAS UTILISER POUR LES VM /!\

iface enp4s0f1 inet manual
#NOT USED

auto vmbr0
iface vmbr0 inet static
address 10.40.3.20/24
gateway 10.40.3.254
bridge-ports eno3
bridge-stp off
bridge-fd 0
#Acces réseau admin. /!\ NE PAS UTILISER POUR LES VM /!\

auto vmbr1
iface vmbr1 inet manual
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

So far so good, but eno1 is still down and vmbr1 is still unknown to the ip command:

root@abeehouse-node-1:~# ethtool eno1
Settings for eno1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
MDI-X: Unknown
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
drv probe link timer ifdown ifup rx_err tx_err
Link detected: no
root@abeehouse-node-1:~# ip link show vmbr1
Device "vmbr1" does not exist.

If I try to start a VM anyway, Proxmox logically reports that vmbr1 doesn't exist:

bridge 'vmbr1' does not exist
kvm: -netdev type=tap,id=net0,ifname=tap102i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 512
TASK ERROR: start failed: QEMU exited with code 1

Then, if I manually start the bridge with ifup, the command returns nothing but the bridge comes up. eno1 is still down, though:

root@abeehouse-node-1:~# ifup vmbr1
root@abeehouse-node-1:~# ip link show vmbr1
10: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
root@abeehouse-node-1:~# ip link show eno1
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master vmbr1 state DOWN mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
root@abeehouse-node-1:~#

I can start the VM at that point, but obviously it doesn't get an IP from my (external) DHCP server. Starting the VM doesn't change anything for eno1:

root@abeehouse-node-1:~# ip link show vmbr1
10: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
root@abeehouse-node-1:~# ip link show eno1
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master vmbr1 state DOWN mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
root@abeehouse-node-1:~#

Then, if I manually start eno1, the VM can get its IP with no problem:

root@abeehouse-node-1:~# ip link set eno1 up
root@abeehouse-node-1:~# ip link show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
root@abeehouse-node-1:~#

So the configuration itself seems fine; it just doesn't come up automatically. I then rebooted the Proxmox node again. Nothing comes up:

root@abeehouse-node-1:~# dmesg | grep eno1
[ 6.535417] tg3 0000:02:00.0 eno1: renamed from eth0
root@abeehouse-node-1:~# dmesg | grep vmbr1
root@abeehouse-node-1:~# ip link show eno1
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 18:66:da:93:e7:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
root@abeehouse-node-1:~# ip link show vmbr1
Device "vmbr1" does not exist.
root@abeehouse-node-1:~#

If I start the interface and bridge manually, everything runs fine again.

Then I deleted vmbr1 and restarted the server. This time I plugged the Ethernet cable into the second port of my PCIe network card (enp4s0f1), recreated vmbr1 with that port as a slave via the WebUI, and applied it. Here is the resulting /etc/network/interfaces:

root@abeehouse-node-1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno3 inet manual
#ADMIN ACCESS PORT

iface eno1 inet manual
#VLAN ACCESS PORT

auto eno2
iface eno2 inet static
address 10.40.5.20/24
#Intercom abeehouse. /!\ NE PAS UTILISER POUR LES VM /!\

iface eno4 inet manual
#NOT USED

auto enp4s0f0
iface enp4s0f0 inet static
address 10.40.7.20/24
mtu 9000
#Acces à honeycomb. /!\ NE PAS UTILISER POUR LES VM /!\

iface enp4s0f1 inet manual
#NOT USED

auto vmbr0
iface vmbr0 inet static
address 10.40.3.20/24
gateway 10.40.3.254
bridge-ports eno3
bridge-stp off
bridge-fd 0
#Acces réseau admin. /!\ NE PAS UTILISER POUR LES VM /!\

auto vmbr1
iface vmbr1 inet manual
bridge-ports enp4s0f1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

root@abeehouse-node-1:~#

enp4s0f1 stays down and vmbr1 still doesn't exist according to the ip command:

root@abeehouse-node-1:~# ip link show enp4s0f1
7: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether a0:36:9f:e9:74:92 brd ff:ff:ff:ff:ff:ff
root@abeehouse-node-1:~# ip link show vmbr1
Device "vmbr1" does not exist.
root@abeehouse-node-1:~#

This time though, if I manually start vmbr1 with ifup, enp4s0f1 comes up with it:

root@abeehouse-node-1:~# ifup vmbr1
root@abeehouse-node-1:~# ip link show vmbr1
9: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether a0:36:9f:e9:74:92 brd ff:ff:ff:ff:ff:ff
root@abeehouse-node-1:~# ip link show enp4s0f1
7: enp4s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP mode DEFAULT group default qlen 1000
link/ether a0:36:9f:e9:74:92 brd ff:ff:ff:ff:ff:ff
root@abeehouse-node-1:~#

I can start a VM with this bridge and it can get an IP address via DHCP, so the network works without manually starting enp4s0f1. However, if I restart the server again...

root@abeehouse-node-1:~# dmesg | grep enp4s0f1
[ 6.695286] ixgbe 0000:04:00.1 enp4s0f1: renamed from eth5
root@abeehouse-node-1:~# dmesg | grep vmbr1
root@abeehouse-node-1:~# ip link show enp4s0f1
7: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether a0:36:9f:e9:74:92 brd ff:ff:ff:ff:ff:ff
root@abeehouse-node-1:~# ip link show vmbr1
Device "vmbr1" does not exist.
root@abeehouse-node-1:~#

Everything is down again. I have also tried swapping the Ethernet cable with a known working one and got the same results.

Manually starting eno1 and vmbr1 is fine for now, but I'm wondering if something deeper is wrong with those servers before putting this small cluster into production. If anyone has any idea what's going on, I'll gladly take advice.
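Until the root cause is found, the manual steps above could be automated at boot. One possible stopgap (a sketch, not tested on PVE; the unit name and binary paths are assumptions to verify on your system with `which ifup` and `which ip`) is a oneshot systemd unit that runs after the networking service:

```ini
# /etc/systemd/system/vmbr1-up.service -- hypothetical workaround unit
[Unit]
Description=Bring up vmbr1 and eno1 manually after boot (workaround)
After=networking.service
Wants=networking.service

[Service]
Type=oneshot
# Replicates the manual steps from this thread
ExecStart=/usr/sbin/ifup vmbr1
ExecStart=/usr/sbin/ip link set eno1 up

[Install]
WantedBy=multi-user.target
```

It would be enabled with `systemctl enable vmbr1-up.service`. This only papers over the problem, of course, so I wouldn't put it in production without understanding why the interfaces don't autostart.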
 
Hi there.

Since my last post, I encountered the same problem on another set of Proxmox servers intended for another cluster, but this time I managed to find the source of the problem.

It turns out that if you put /!\ (probably the "\" character specifically) in the comment of any interface, it breaks the autostart of the interfaces that follow it in the config file. This happens whether you do it from the web interface or by editing /etc/network/interfaces directly.

So, my problem is solved. But I don't think this is expected behavior (the contents of a commented-out line shouldn't matter), so someone should probably take a look at it someday. I don't know whether the problem is in ifupdown2 or in Proxmox itself, however.
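For anyone hitting this, a trailing "\" may matter because interfaces(5) traditionally treats a backslash at the end of a line as a line continuation, which would make the parser swallow the following stanza. Here is a small self-contained sketch (the sample file content is illustrative; on a real node, point the grep at /etc/network/interfaces) that flags comment lines containing a backslash:

```shell
# Sketch: flag comment lines containing a backslash in an interfaces-style
# file, since (per this thread) they can break autostart of later stanzas.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
auto eno2
iface eno2 inet static
address 10.40.5.20/24
#Intercom abeehouse. /!\ NE PAS UTILISER POUR LES VM /!\

auto vmbr1
iface vmbr1 inet manual
bridge-ports eno1
EOF
grep -n '^[[:space:]]*#.*\\' "$cfg"              # show offending lines with numbers
matches=$(grep -c '^[[:space:]]*#.*\\' "$cfg")   # count them
echo "comment lines with a backslash: $matches"
rm -f "$cfg"
```

On the configs posted earlier in this thread, this would match every "/!\" comment, which lines up with vmbr1 (declared after them) never autostarting.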
 
