[TUTORIAL] Connect vmbr0 to vmbr1 - veth peer tunnel

cave (Renowned Member since Feb 18, 2016, Vienna)
Hi,

I have two nodes with different numbers of NICs.

Let's assume PVE1 has only a single NIC, eno1, which is connected to vmbr0.
PVE2 has two NICs, eno1 and eno2, which are connected to vmbr0 and vmbr1 respectively.

[Attachment: 1736431211135.png]

On PVE1, I'd like to connect the vmbr1 bridge to vmbr0.
That way I could run a VM on PVE2 and easily migrate it to PVE1, without having to change the VM's network bridge setting for it to boot.
The vmbr1 setting would be "aliased" on the host that lacks a second NIC and second bridge.

I was already reading and following the advice from this thread:


Following the veth man page (https://man7.org/linux/man-pages/man4/veth.4.html), I managed to set up a veth peer tunnel:



[Attachment: 1736433086608.png]
But it's not yet connected to the bridges.

root@pve1:~# cat /etc/network/interfaces.d/veth_peer
auto veth_vmbr0
iface veth_vmbr0 inet manual
link-type veth
veth-peer-name veth_vmbr1

auto veth_vmbr1
iface veth_vmbr1 inet manual
link-type veth
veth-peer-name veth_vmbr0
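
For reference, the same pair can be brought up transiently with plain iproute2 (which is essentially what ifupdown2 does at boot for the stanza above). The sketch below wraps the commands in a throwaway user+network namespace so it can be tried safely anywhere without root on the host; on the real PVE node you would run the inner `ip` commands directly as root instead:

```shell
# Transient equivalent of the interfaces.d stanza above (lost on reboot).
# Wrapped in a disposable namespace so it is safe to try anywhere;
# drop the `unshare -rn sh -c` wrapper to do it for real on the host.
unshare -rn sh -c '
  ip link add veth_vmbr0 type veth peer name veth_vmbr1
  ip link set veth_vmbr0 up
  ip link set veth_vmbr1 up
  ip -o link show type veth
'
```

Each end shows up as `veth_vmbrX@veth_vmbrY`, confirming the two interfaces are peers of one pair.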

root@pve1:~# ifconfig | grep veth_vmbr
veth_vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
veth_vmbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

root@pve1:~# ifconfig
...
veth_vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::8828:1dff:xxxx:4ed1 prefixlen 64 scopeid 0x20<link>
ether 8a:28:1d:f9:xx:xx txqueuelen 1000 (Ethernet)
RX packets 167 bytes 11786 (11.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 170 bytes 11996 (11.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth_vmbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::94bc:caff:xxxx:8003 prefixlen 64 scopeid 0x20<link>
ether 96:bc:ca:91:xx:xx txqueuelen 1000 (Ethernet)
RX packets 170 bytes 11996 (11.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 167 bytes 11786 (11.5 KiB)
...


root@pve1:~# ip a show veth_vmbr0
54: veth_vmbr0@veth_vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 8a:28:1d:f9:xx:xx brd ff:ff:ff:ff:ff:ff
inet6 fe80::8828:1dff:xxxx:4ed1/64 scope link
valid_lft forever preferred_lft forever
root@pve1:~# ip a show veth_vmbr1
53: veth_vmbr1@veth_vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 96:bc:ca:91:xx:xx brd ff:ff:ff:ff:ff:ff
inet6 fe80::94bc:caff:xxxx:8003/64 scope link
valid_lft forever preferred_lft forever

root@pve1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual


auto vmbr0
iface vmbr0 inet manual
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

...

auto vmbr1
iface vmbr1 inet manual
bridge-ports none
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094


A Linux bridge can be considered a "virtual" switch.
A veth peer tunnel is then the equivalent of a patch cable (or crossover cable, so to speak) connecting those "switches".


What is now the correct way to attach the veth interfaces to the bridge-ports setting of vmbr0 and vmbr1?

May I connect two devices (eno1 & veth_vmbr0) to bridge-ports on vmbr0, and only veth_vmbr1 on vmbr1?
How can I achieve that correctly without breaking the PVE host networking in /etc/network/interfaces?

[Attachment: 1736433365586.png]
auto vmbr0
iface vmbr0 inet manual
bridge-ports eno1 veth_vmbr0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

...

auto vmbr1
iface vmbr1 inet manual
bridge-ports veth_vmbr1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
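
With ifupdown2 (the PVE default), applying and checking a change like this could look roughly as follows; keep an out-of-band console open in case connectivity drops (command sequence is a sketch, not verbatim from my session):

```
# Re-apply /etc/network/interfaces plus everything sourced from
# interfaces.d/ without a reboot (ifupdown2 only):
root@pve1:~# ifreload -a

# Verify that the veth ends actually became bridge ports:
root@pve1:~# bridge link show
```

`bridge link show` should list veth_vmbr0 as a port of vmbr0 and veth_vmbr1 as a port of vmbr1.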
 
I don't get why you want to connect two bridges. I would just get rid of vmbr1 and connect everything to vmbr0. And a bridge is not a virtual switch. If you would like to have virtual switches, look at Open vSwitch, which does exactly what you want ... a switch.
 
I don't get why you want to connect two bridges.
I want to use the node as a failover node. That's why I mentioned I want to migrate the VMs; that's not possible if the network settings are not identical.

For bigger clusters, it makes sense to define a more detailed failover behavior. For example, you may want to run a set of services on node1 if possible. If node1 is not available, you want to run them equally split on node2 and node3. If those nodes also fail, the services should run on node4. To achieve this you could set the node list to:

# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"

The nodes with multiple dedicated NICs should run the VMs on eno2/vmbr1/bond0/LACP. HA/replication between identical nodes is not a problem.
As a last resort, the VMs should also run on eno1/vmbr0/vmbr1 on a node with different, fewer, or only one NIC.
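
For the multi-NIC nodes, the eno2/bond0/LACP side mentioned above could look roughly like this in /etc/network/interfaces (a sketch only: the interface names and the second bond member are assumptions; adjust to your hardware, and the switch side must be configured for LACP as well):

```
# Sketch: LACP bond as the physical uplink of vmbr1 on PVE2.
auto bond0
iface bond0 inet manual
	bond-slaves eno2 eno3
	bond-miimon 100
	bond-mode 802.3ad
	bond-xmit-hash-policy layer3+4

auto vmbr1
iface vmbr1 inet manual
	bridge-ports bond0
	bridge-stp off
	bridge-fd 0
	bridge-vlan-aware yes
	bridge-vids 2-4094
```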


would just get rid of vmbr1
"Just don't use the NIC" is not a viable answer if I want to use that NIC. :rolleyes:

And a bridge is not a virtual switch.
Please check the official PVE documentation regarding bridges:

Default Configuration using a Bridge

Bridges are like physical network switches implemented in software.
 
@Chris @fabian @t.lamprecht

a) I got it working.
b) Please move the thread to Proxmox VE: Networking and Firewall.
c) I'll expand this thread into a tutorial.

I just had to ifdown, change the config, and ifup again. I only needed an additional out-of-band NIC to keep access.

My VM/LXC is now connected to vmbr1 and receives an IP from my DHCP server on the VLAN.
[Attachment: 1736455234866.png]

[Attachment: 1736455347529.png]

So I'd consider it a success.

[Attachment: 1736455171254.png]

I already tested the connection with iperf3; it works perfectly.
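
For reference, the throughput test can be reproduced with a guest on each side of the veth link (the IP below is a placeholder; assumes iperf3 is installed in both guests):

```
# In the guest on vmbr1 (server side):
iperf3 -s

# In the guest on vmbr0 (client side), targeting the server's IP:
iperf3 -c 192.0.2.10 -t 10
```

Note that a veth pair is a software path, so the measured rate can exceed the physical NIC speed for traffic that stays on the host.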



I did some cleanup and wanted to remove the unused vmbr0.15 from the network settings.
[Attachment: 1736455488468.png]

That means I cannot use the convenient PVE web interface with ifupdown2 for the network config anymore. :(
[Attachment: 1736455520979.png]


So I have to manually edit /etc/network/interfaces and remove/comment the Linux VLAN section:
root@pve1:~# ifdown vmbr0
root@pve1:~# vi /etc/network/interfaces
root@pve1:~# ifup vmbr0

Is it possible to fix that?
Do I have to move the veth_peer section from /etc/network/interfaces.d/veth_peer directly into /etc/network/interfaces so that PVE reads and resolves it?
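
If moving the stanzas inline turns out to be necessary (my untested assumption: the PVE tooling only parses /etc/network/interfaces itself, as the autogenerated header suggests), it would simply mean placing them there instead of in interfaces.d/:

```
# Possible workaround (untested): define the veth pair directly in
# /etc/network/interfaces so the PVE tooling can see it.
auto veth_vmbr0
iface veth_vmbr0 inet manual
	link-type veth
	veth-peer-name veth_vmbr1

auto veth_vmbr1
iface veth_vmbr1 inet manual
	link-type veth
	veth-peer-name veth_vmbr0
```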


Feature Request:
I know it's not that important, but it's nice to have, probably a low-hanging fruit, and also basic functionality. I hope it's documented well enough in this thread.
[Attachments: 1736456263152.png, 1736456370935.png]
Please add veth-peer functionality to that section, so veth devices can easily be created and added to the Ports/Slaves (vmbr bridge-ports) setting.


The network setup with vmbr0 <-veth-tunnel-> vmbr1 survived a reboot successfully.


Negative side effect: VMs/LXCs on vmbr0 lost their connection. Moving them to vmbr1 worked.
 
