vmbr interface does not show up in GUI

jHGbc
New Member · Dec 11, 2018
Hi everyone,

I am fighting with the network setup at the moment. What we need is an active-backup bond0 carrying both the native VLAN and VLAN 5.

So what I did is the straightforward thing: I bundled the physical interfaces into bond0, added a VLAN interface on top of it (bond0.5), and attached a bridge to that interface (vmbr5).

But neither bond0, bond0.5, nor vmbr5 shows up in the GUI. Only the physical links are visible and active.

The whole config is autogenerated via Ansible and resides in /etc/network/interfaces.d/.
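As far as I know, stock Debian ifupdown only picks up files under /etc/network/interfaces.d/ when the main file contains a matching "source" (or "source-directory") stanza, so that is worth checking. A minimal sketch of that check, using a temp file as a stand-in for the real /etc/network/interfaces:

```shell
# Stand-in for /etc/network/interfaces -- here deliberately without a
# "source" stanza, so the drop-in directory would be ignored.
main=$(mktemp)
printf 'auto lo\niface lo inet loopback\n' > "$main"

# Look for a source / source-directory stanza at the start of a line.
if grep -qE '^[[:space:]]*source(-directory)?[[:space:]]' "$main"; then
  verdict="sourced"
else
  verdict="not-sourced"   # ifupdown would never read interfaces.d/ files
fi
echo "$verdict"
rm -f "$main"
```

On the real host the same grep against /etc/network/interfaces shows whether the drop-in files are read at all.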

Code:
auto bond0
iface bond0 inet static
mtu 1500
address 10.10.111.18
netmask 255.255.255.0
gateway 10.10.111.1
dns-nameservers 8.8.8.8 8.8.4.4
dns-search domain.tld
bond-mode active-backup
bond-miimon 100
bond-slaves none

auto bond0.5
iface bond0.5 inet manual
vlan-raw-device bond0

auto enp11s0f0
iface enp11s0f0 inet manual
bond-master bond0

auto enp11s0f1
iface enp11s0f1 inet manual
bond-master bond0

auto ib0
iface ib0 inet static
address 172.16.2.2
netmask 255.255.255.0


auto vmbr5
iface vmbr5 inet manual
bridge_ports bond0.5
bridge_stp off
bridge_fd 0
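For comparison, the bond stanza that Proxmox's own GUI typically writes names the slaves directly instead of using "bond-slaves none" plus per-NIC "bond-master" lines. A sketch using my interface names (I have not verified whether the GUI parser accepts the bond-master style at all):

```
auto enp11s0f0
iface enp11s0f0 inet manual

auto enp11s0f1
iface enp11s0f1 inet manual

auto bond0
iface bond0 inet static
    address 10.10.111.18
    netmask 255.255.255.0
    gateway 10.10.111.1
    bond-slaves enp11s0f0 enp11s0f1
    bond-miimon 100
    bond-mode active-backup
```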

All interfaces are up according to ip a:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp11s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether e4:1f:13:f0:c9:14 brd ff:ff:ff:ff:ff:ff
3: enp11s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether e4:1f:13:f0:c9:14 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e4:1f:13:f0:c9:14 brd ff:ff:ff:ff:ff:ff
    inet 10.10.111.18/24 brd 10.10.111.255 scope global bond0
       valid_lft forever preferred_lft forever
5: enp0s29f1u2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e6:1f:13:f0:c9:18 brd ff:ff:ff:ff:ff:ff
6: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast state UP group default qlen 256
    link/infiniband 80:00:00:03:fe:80:00:00:00:00:00:00:00:11:75:00:00:77:f6:56 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 172.16.2.2/24 brd 172.16.2.255 scope global ib0
       valid_lft forever preferred_lft forever
    inet6 fe80::211:7500:77:f656/64 scope link
       valid_lft forever preferred_lft forever
7: bond0.5@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr5 state UP group default qlen 1000
    link/ether e4:1f:13:f0:c9:14 brd ff:ff:ff:ff:ff:ff
8: vmbr5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e4:1f:13:f0:c9:14 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e61f:13ff:fef0:c914/64 scope link
       valid_lft forever preferred_lft forever
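For what it's worth, the enslavement chain in the output above looks correct. Pulling out the "master" fields makes that explicit (the sample lines are copied verbatim from the ip a output above):

```shell
# Two lines from the ip a output: the NIC is enslaved to bond0, and the
# VLAN interface bond0.5 is enslaved to the bridge vmbr5.
sample='2: enp11s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
7: bond0.5@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr5 state UP group default qlen 1000'

# Extract just the "master <iface>" fields.
masters=$(printf '%s\n' "$sample" | grep -o 'master [a-z0-9.]*')
echo "$masters"
```

So the kernel-side setup seems fine, and the problem appears to be purely on the GUI side.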

So I don't really get the problem here.
Thank you very much for your help.
 
