extra VLAN bridges on top of a bonded network in a PVE cluster

stefws
Jan 29, 2015
So I've got two physical networks on my PVE cluster: one public-facing over eth0, and one private, closed backend network bonded over two NICs (eth1+eth2) that connect to two separate, interconnected switches.
(Sorry for the poor ASCII art quality :)

[Attached diagram: PVEnetworks.png]

Over the private, untagged backend network vmbr1 I run Ceph storage, which works fine.
Juniper EX switches 1 & 2 have all ports in default access mode... I believe.
bond1 seems to be working: if I pull one cable from a switch, pings keep flowing across the other switch, as expected.
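
(For reference, the bond/LACP state can be double-checked through the bonding driver's proc file, e.g.:

root@node4:~# cat /proc/net/bonding/bond1

which lists the bonding mode, the MII status of eth1/eth2 and the 802.3ad aggregator/partner details.)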

Now I also want to use this closed, bonded physical network for various tagged (and thus private) inter-application communication VLANs.
So I created three extra bridges, vmbr20/vmbr30/vmbr40, on top of bond1, but I don't seem to get these networks working between the cluster nodes, and I'm wondering what I'm doing wrong. Probably something needs to define the layer-2 'routing' between these bridges, or whatever...
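
Do the EX ports facing eth1/eth2 perhaps also need to be set to trunk mode for VLANs 20/30/40 (with the untagged Ceph traffic staying on the native VLAN)? Something like this, just as a sketch in (non-ELS) Junos syntax, where ge-0/0/1 and the VLAN names are placeholders, not my actual config:

set vlans iac20 vlan-id 20
set vlans iac30 vlan-id 30
set vlans iac40 vlan-id 40
set interfaces ge-0/0/1 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members [ iac20 iac30 iac40 ]

Or is the missing piece on the Proxmox side?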

Appreciate any advice on my mistakes here, TIA!


root@node4:~# ping node3.ceph
PING node3.ceph (10.0.3.3) 56(84) bytes of data.
64 bytes from 10.0.3.3: icmp_req=1 ttl=64 time=0.192 ms
64 bytes from 10.0.3.3: icmp_req=2 ttl=64 time=0.211 ms
^C
--- node3.ceph ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.192/0.201/0.211/0.017 ms

root@node4:~# ping 10.20.0.3
PING 10.20.0.3 (10.20.0.3) 56(84) bytes of data.
^C
--- 10.20.0.3 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

root@node4:~# ping 10.30.0.3
PING 10.30.0.3 (10.30.0.3) 56(84) bytes of data.
^C
--- 10.30.0.3 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

root@node4:~# ping 10.40.0.3
PING 10.40.0.3 (10.40.0.3) 56(84) bytes of data.
^C
--- 10.40.0.3 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms


root@node4:~# brctl show
bridge name     bridge id           STP enabled     interfaces
vmbr0           8000.001b7894055a   no              eth0
vmbr1           8000.001b78940558   no              bond1
vmbr20          8000.001b78940558   no              bond1.20
vmbr30          8000.001b78940558   no              bond1.30
vmbr40          8000.001b78940558   no              bond1.40
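
(The VLAN sub-interfaces themselves can be inspected via the 8021q proc interface and iproute2, e.g.:

root@node4:~# cat /proc/net/vlan/config
root@node4:~# ip -d link show bond1.20

just to confirm each one carries the expected VLAN id on raw device bond1.)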


ifconfig shows these bridges:

vmbr0     Link encap:Ethernet  HWaddr 00:1b:78:94:05:5a
          inet addr:xx.xx.xx.xx  Bcast:xx.xx.xx.31  Mask:255.255.255.224
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18066632 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14748051 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6041989878 (5.6 GiB)  TX bytes:5542030644 (5.1 GiB)

vmbr1     Link encap:Ethernet  HWaddr 00:1b:78:94:05:58
          inet addr:10.0.3.4  Bcast:10.0.3.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10107445 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9768673 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5768727202 (5.3 GiB)  TX bytes:5302442244 (4.9 GiB)

vmbr20    Link encap:Ethernet  HWaddr 00:1b:78:94:05:58
          inet addr:10.20.0.4  Bcast:10.20.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:502 errors:0 dropped:0 overruns:0 frame:0
          TX packets:833 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:160372 (156.6 KiB)  TX bytes:66816 (65.2 KiB)

vmbr30    Link encap:Ethernet  HWaddr 00:1b:78:94:05:58
          inet addr:10.30.0.4  Bcast:10.30.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)

vmbr40    Link encap:Ethernet  HWaddr 00:1b:78:94:05:58
          inet addr:10.40.0.4  Bcast:10.40.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)
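
(brctl can also dump the forwarding table of each bridge, e.g.:

root@node4:~# brctl showmacs vmbr20

if vmbr20 only ever learns local MACs and nothing from the other nodes, the tagged frames presumably never make it across the switches.)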



root@node4:~# cat /etc/network/interfaces
# network interface settings

auto bond1.20
iface bond1.20 inet manual
        vlan-raw-device bond1

auto bond1.30
iface bond1.30 inet manual
        vlan-raw-device bond1

auto bond1.40
iface bond1.40 inet manual
        vlan-raw-device bond1

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto bond1
iface bond1 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad

# Pub NIC/Switch
auto vmbr0
iface vmbr0 inet static
        address xx.xx.xx.xx
        netmask 255.255.255.224
        gateway xx.xx.xx.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # I know :) but that's speed I got so far
        post-up /sbin/ethtool -s eth0 speed 100 duplex full autoneg off

# Ceph Storage Network 1Gbs bonded
auto vmbr1
iface vmbr1 inet static
        address 10.0.3.4
        netmask 255.255.255.0
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0

# Inter Application Network #1
auto vmbr20
iface vmbr20 inet static
        address 10.20.0.4
        netmask 255.255.0.0
        bridge_ports bond1.20
        bridge_stp off
        bridge_fd 0

# Inter Application Network #2
auto vmbr30
iface vmbr30 inet static
        address 10.30.0.4
        netmask 255.255.0.0
        bridge_ports bond1.30
        bridge_stp off
        bridge_fd 0

# Inter Application Network #3
auto vmbr40
iface vmbr40 inet static
        address 10.40.0.4
        netmask 255.255.0.0
        bridge_ports bond1.40
        bridge_stp off
        bridge_fd 0
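
To narrow down where the tagged frames get lost, I suppose I could capture on the bond and on a VLAN sub-interface while pinging, something like (VLAN 20 just as an example):

root@node4:~# tcpdump -e -nn -i bond1 vlan 20
root@node4:~# tcpdump -e -nn -i bond1.20 icmp

If the tagged frames leave node4 but never show up on node3, that would point at the switch config rather than the Proxmox side.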
 
