Multiple bridges

Jun 6, 2024
I'd set up two bond interfaces, bond0 (2 NICs) and bond1 (the other 2 NICs), to divide traffic from my VMs across different interfaces.
I connected vmbr0 to bond0 (for management and some VMs), then created vmbr1 connected to bond1.
If I test vmbr1 with a static IP address from the network, it works fine, but when I connect a guest to vmbr1, the guest doesn't get network access.

bridges.jpg

network.jpg


This is my intended network layout:

layout.jpg
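
For reference, this layout in /etc/network/interfaces terms looks roughly like this (a sketch, not my exact file: the NIC names, the 802.3ad bond mode and the gateway here are placeholders):

Code:
auto lo
iface lo inet loopback

# first bond: two NICs, used for management and some VMs
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

# second bond: the other two NICs
auto bond1
iface bond1 inet manual
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad

# bridge for management and some VMs
auto vmbr0
iface vmbr0 inet static
        address 172.16.0.38/16
        gateway 172.16.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

# second bridge, meant for the other VMs
auto vmbr1
iface vmbr1 inet static
        address 172.16.1.1/16
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0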
 
Hey,

your switch supports LACP and has it enabled, right? Also, vmbr0 and vmbr1 have IPs in the same subnet; is this intentional? How does the VM get an IP, is it static or do you have DHCP running on 172.16.0.0/16? Can you ping your router from the PVE host?
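
If the bonds are in 802.3ad mode, you can also check the LACP state from the PVE side (a quick sketch):

Code:
# shows the bonding mode, LACP aggregator info and per-slave link state
cat /proc/net/bonding/bond0
cat /proc/net/bonding/bond1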
 
Yes, they are in the same subnet on purpose.
As I said, if I set an IP on the PVE host on vmbr0 or vmbr1, I can ping them.
But when I attach vmbr1 to the guest's network interface, the guest is not able to connect to the network, whether static or DHCP.
 
Hi, you have tag=1 on your VM interface on vmbr1 (I would not use VLAN 1 tagged),
and VLAN 1 is the default untagged VLAN of the vmbrX bridges; maybe this is the problem...
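
As an example, re-adding the NIC without the tag (the VM ID and MAC below are placeholders) so the guest uses the bridge's untagged VLAN:

Code:
# redefine net0 on VM 100 without tag=1; keep the existing MAC by passing it explicitly
qm set 100 --net0 virtio=BC:24:11:00:00:01,bridge=vmbr1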
 
I would check a few points:
- Make the vmbrs VLAN-aware (this should not impact untagged virtual interfaces); see the sketch after this list.
- How are the ports tagged/untagged on the switch side?
- I would advise against a giant /16 LAN; you could split your local /16 into multiple /24 VLANs.
- What is the purpose of having 2 vmbrs with different IPs in the same /16 range? Maybe there is a better solution to the initial need?
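
For the first point, a VLAN-aware vmbr1 stanza could look like this (a sketch; keep your own address and bond, and set bridge-vids to whatever VLAN range you actually need):

Code:
auto vmbr1
iface vmbr1 inet static
        address 172.16.1.1/16
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094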

Have a nice day,

EDIT: also, do not forget to answer the other questions from Hannes Laimer
 
Hi,
The network has more than 254 devices, so a /24 subnet would force a routed network, while the intent is to have one flat network with better performance.
The purpose, as I said at the beginning, is to balance network card load across VMs.
Instead of having 8 VMs on the same network card, I will put 4 VMs on one card (bond) and 4 VMs on another.

I come from a VMware environment, where this is a simple task: create multiple vmnets, each assigned to some NICs.
Perhaps Proxmox doesn't support multiple vmbrs.
 
Using 2 vmbrs, each with its own bond, is perfectly possible with Proxmox, although the config will differ based on scenario/purpose.
There must be a (maybe minor) difference somewhere between the 2 bonds, either on the vmbr side or on the switch side.
Can you show the content of /etc/network/interfaces?
What is the config of the 4 ports on the switch side?

Another option, if all NICs have the same speed and all VMs' bandwidth needs are roughly equal, is to put all 4 NICs in 1 bond linked to vmbr0.
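
That single-bond variant would be something like this (a sketch; NIC names and mode are placeholders, vmbr0 keeps bridge-ports bond0 and the bond1/vmbr1 stanzas are dropped):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad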
 
I think some people didn't understand the ping test I reported:

Let's try this:
HOST:
vmbr0 - 172.16.0.38 -> ping OK
vmbr1 - 172.16.1.1 -> ping OK

GUEST:
Same guest with DHCP:
network on vmbr0: DHCP works.
network on vmbr1: DHCP doesn't work (cannot get an IP address).
Same guest with a static IP (172.16.1.2):
network on vmbr0: ping OK.
network on vmbr1: doesn't ping.

I tried with a Linux guest and a Windows guest; both behave the same.
It seems like Proxmox doesn't let vmbr1 communicate between guest and network.
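
One way to see where the traffic stops is to watch the DHCP broadcasts on the PVE host while the guest retries (just a sketch):

Code:
# if requests show up on vmbr1 but never on bond1, the problem is inside the host;
# if they leave bond1 but no reply comes back, look at the switch/upstream
tcpdump -ni vmbr1 port 67 or port 68
tcpdump -ni bond1 port 67 or port 68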
 
Dear,

I think we correctly understood what you tested and what the results (symptoms) were, but without more information about the config (possible causes) it is difficult to narrow down the root cause.

Can you please provide:
- the content of /etc/network/interfaces
- the config of each of the 4 switch ports connected to the host

With kind regards,
 
Unfortunately, my UniFi switches only support 2-port LAGs, so I need to keep those bonds at 2 NICs each.
I understand there's no problem with the bond itself, because when I set an IP address on the vmbr on the host I can ping it, and I also get no errors on the switch.

Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
FOUND THE ISSUE!!
Sorry, it was a cable issue. Someone had changed which NICs were connected, so when I checked the switch interface status again, lag2 was down.
Now it seems to work.

The issue with Proxmox not showing connected interfaces makes it very hard to identify which NICs are connected and available for configuration. In my case I have 10 NICs across 3 different cards (2 with 4 NICs, and one with 2 NICs). I identified them by matching MAC addresses to the connected ports in iDRAC.
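
For anyone else with many NICs, a quick way to see which ports actually have link on the PVE host (interface names will of course differ):

Code:
# one line per interface with state and MAC; NO-CARRIER/DOWN means no link on that port
ip -br link show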
 
Good news that your problem is identified and solved.

In this type of case the command "ip a" (short for "ip address") can help identify the port / bond / MACs.
For example, if I unplug one of my 2 NICs at home...
Code:
admin@pve1:~$ ip a
1: redacted - LOOPBACK - don't care
2: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ba:ba:ba:ba:ba:ba brd ff:ff:ff:ff:ff:ff permaddr 11:22:33:44:55:66
3: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 state DOWN group default qlen 1000
    link/ether ba:ba:ba:ba:ba:ba brd ff:ff:ff:ff:ff:ff permaddr 11:22:33:44:55:77
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether ba:ba:ba:ba:ba:ba brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:ba:ba:ba:ba:ba brd ff:ff:ff:ff:ff:ff
    inet6 redacted/64 scope link
       valid_lft forever preferred_lft forever
6: different VLAN's follow - redacted
As you can see, the output shows which NIC is down, which bond (and which vmbr) it is linked to, and its MAC (the bond's MAC if overridden, plus the NIC's original one as permaddr).
Having had a similar problem a few years back with a NIC in the wrong VLAN on a server with "too many" NICs, I now pay close attention to which NIC name in the OS corresponds to which NIC at the back of the box, and where it is plugged in, at the beginning of any non-trivial troubleshooting.
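
If you can reach the machine physically, blinking the port LED also helps map OS names to the ports at the back of the box (most, but not all, NICs support it):

Code:
# blink the LED of enp2s0 for 10 seconds (interface name is just an example)
ethtool -p enp2s0 10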

Have a nice day,
 
