Vlan setup issue

mihaib
Oct 10, 2024
Hi all,

I'm having trouble understanding how this setup should be done:
eno1 + eno2 -> bond0 (lacp) -> bridge0
vlan1 : 172.16.10.0/24
vlan10 : 172.16.20.0/29
Now I need this bridge to be able to talk to vlan10 and to non-VLAN clients (i.e. vlan1). Two containers each need one virtual NIC connected to the LAN and one virtual NIC connected to the vlan10 network.
I connected bond0 to both VLANs, but there is no traffic on vlan10.

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
#lacp

auto vmbr0
iface vmbr0 inet static
    address 172.16.10.6/24
    gateway 172.16.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#bridge

auto vlan10
iface vlan10 inet manual
    vlan-raw-device bond0
#wireshark

auto vlan1
iface vlan1 inet manual
    vlan-raw-device bond0
Problem: every container in vlan1 works as expected; containers in vlan10 do not.
Any suggestions?

thank you
 
So, for a hypervisor you need all the VLANs you're going to use trunked in (and that's a question for your network admin). Once the VLANs are available on your NICs, you can set up your bond (I see the LACP setup enabled); then you want two separate bridges, one for each VLAN.

So, eno1 + eno2 = bond0
bond0 is parent to bond0.1610 (made-up VLAN ID) for 172.16.10.0/24 (assign no IP here)
bond0 is also parent to bond0.1620 for 172.16.20.0/24 (assign no IP here)
bond0.1610 is the port for vmbr0 (this is where 172.16.10.6/24 and the default route are assigned)
bond0.1620 is the port for vmbr1 (NO IP needed here)

It all hinges on how you have the VLANs piped into the connections, and since you're doing LACP (802.3ad), I'm assuming your network team has the ability to trunk the VLANs in (rather than putting the port in access mode for a single server).

The pieces I see missing are the bond0.$VLANID devices.

Example from our setup using your numbers (of course I don't know your VLAN IDs, so you'll have to fix that part):
auto lo
iface lo inet loopback

auto eno5
iface eno5 inet manual

auto eno6
iface eno6 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno5 eno6
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-lacp-rate 1

auto bond0.1610
iface bond0.1610 inet manual

auto bond0.1620
iface bond0.1620 inet manual

auto vmbr0
iface vmbr0 inet static
    address 172.16.10.6/24
    gateway 172.16.10.1
    bridge-ports bond0.1610
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0.1620
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

source /etc/network/interfaces.d/*
 
It works: from the Proxmox console I can ping the vlan10 interface of the router (172.16.20.1/28).
Now the next step I need to figure out is how to make the route on the container.
The container has 2 interfaces:
1 - bond0 - 172.16.10.0/24 - works well
2 - bond0.10 - 172.16.20.3/28 - not able to reach 172.16.20.1 on the router.
I think this is because there is no route.
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-lacp-rate 1

#bond vlan10
auto bond0.10
iface bond0.10 inet manual

auto vmbr0
iface vmbr0 inet static
    address 172.16.10.6/24
    gateway 172.16.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0.10
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

Code:
root@Bespin:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.10.1     0.0.0.0         UG    0      0        0 vmbr0
172.16.10.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr0
 
Your VMs need to have their network devices attached to the bridges (the virtual switches), not to the VLAN interfaces.
Then the networking inside the container/VM is the same as on a regular (physical) box: you assign an IP and set the mask and default route.
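Inside a Debian-style guest, that could look something like this (a sketch only; the interface names and the 172.16.10.50 address are hypothetical, while 172.16.20.3/28 is taken from the thread):

```
# /etc/network/interfaces inside the guest (sketch)
auto eth0
iface eth0 inet static
    address 172.16.10.50/24      # hypothetical LAN address on vmbr0
    gateway 172.16.10.1          # single default route via the LAN router

auto eth1
iface eth1 inet static
    address 172.16.20.3/28       # vlan10 address on vmbr1; 172.16.20.1 is on-link, so no extra route is needed
```

Note that only one interface carries the default gateway; the /28 gives the guest a connected route to 172.16.20.0/28 automatically.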
 
Fair. I have 2 IPs: vmbr0 - 172.16.10.6/24 and vmbr1 - 172.16.20.2/28.
Now I assigned a gateway only to vmbr0. This lets me ping 172.16.20.1; however, I believe that is because the traffic goes via 172.16.10.1.
On the VMs using vlan10 I cannot ping 172.16.20.1.

I cannot find a way to route that traffic :oops:
 
Your hypervisor shouldn't route the traffic; let the (default) router do that. The bridges don't route anyway.

Since I don't do containers on Proxmox, I can only speak from the VM perspective.
So, from the Proxmox point of view, your VMs should have a network device that looks like this:
Network Device (net0) virtio=<MAC address>,bridge=vmbr1
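For reference, the same device can be set from the host CLI with `qm set` (a sketch; the VM ID 100 is a placeholder):

```shell
# attach the VM's first NIC to bridge vmbr1 (VM ID 100 is hypothetical)
qm set 100 --net0 virtio,bridge=vmbr1
```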
Then inside the guest VM you set up networking as you normally would on a standalone box:

eth0 (or whatever the connection is named in the OS):
IPADDR: 172.16.20.15
MASK: 255.255.255.0
GATEWAY: 172.16.20.1

(depending on the distro or OS it may just use the CIDR form: 172.16.20.15/24)
From what I can tell, the containers would have the network set up when you create the container, but it won't let me see that screen in the create menu because I don't have the pieces for the template, etc., to get there.

You don't need the hypervisor to use the 2nd network. If you do, you set a manual route for the vmbr1 device.
The ip route add command would come into play, and I'm too lazy at the moment to hunt that up.
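If the hypervisor itself really did need the second subnet, it would look roughly like this (a sketch: vmbr1 and 172.16.20.0/28 are from this thread; 192.0.2.0/24 is just a placeholder remote network):

```shell
# assigning the address already creates the connected route for the /28
ip addr add 172.16.20.2/28 dev vmbr1
# send some other network via the vlan10 router instead of the default gateway
ip route add 192.0.2.0/24 via 172.16.20.1 dev vmbr1
```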

So, test the network from within the client VM/container, not the hypervisor (i.e. the vmbr1 device shouldn't need an IP address).

If the VM cannot access the network, it could be that the VLANs are not trunked in properly for a hypervisor.
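A few quick checks from the hypervisor can confirm whether tagged frames are actually arriving (commands I'd try; interface names are taken from this thread):

```shell
# did LACP negotiate? (partner MAC should not be all zeros)
cat /proc/net/bonding/bond0
# watch for 802.1Q-tagged frames for VLAN 10 on the bond; -e prints the link-level header
tcpdump -eni bond0 vlan 10
# on a VLAN-aware bridge, list which VIDs each port carries
bridge vlan show
```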
 
