Split guest VM from host using VLAN

gon

New Member
Apr 9, 2025
Hello,

I'd like to put my guest VMs in a different subnet from my proxmox host.

I have the following setup:

                  Single Ethernet cable / vmbr0
192.168.4.1 ---------------------------------------------- 192.168.4.2
(Dedicated OpenWRT router)                                 (Proxmox host)

I've made vmbr0 VLAN-aware and tagged the VM1 and VM2 traffic:
192.168.10.1 <----------VLAN 1------------------------ Guest VM 1
192.168.20.1 <----------VLAN 2------------------------ Guest VM 2

On the OpenWRT side I've configured a VLAN on the port facing the Proxmox host.
I can see some traffic going through the VLAN interface, but the guest VMs cannot get an IP via DHCP.

Is this the correct approach to put guest VMs in a different subnet than the Proxmox host?
I want to manage as much of the networking as possible directly in OpenWRT.

Do you have a guide or any documentation to recommend so I can understand a bit better how to achieve something like this?

Thanks for your help!
 
Hello!

There are a few ways to configure VLANs in PVE. Using the SDN is a nice option if you're up for it; making individual Linux VLAN interfaces (vmbr0.100, for example) is another. In the end, making the bridge VLAN-aware and adding the tag on a per-VM basis will work just fine with OpenWRT as well.

What VLAN tags did you use for your two VM subnets? Did you add the VLAN tag to the NIC of the VM? If you give a VM a static IP address, can you ping the OpenWRT address?
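For the per-VM tagging approach, the tag can also be set on the VM's NIC from the CLI instead of the GUI. A quick sketch (VM ID 100, VLAN 10 and bridge vmbr0 are just example values, adjust to your setup):

```shell
# Attach the VM's first NIC to vmbr0 and tag its traffic with VLAN 10
qm set 100 --net0 virtio,bridge=vmbr0,tag=10

# Verify the resulting NIC configuration
qm config 100 | grep net0
```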

Cheers!
 
The easiest way in my opinion (especially since you have already configured your router) is to adjust your /etc/network/interfaces file as follows. Note that vmbr0 itself carries no address here; the management IP lives on the VLAN 4 subinterface:

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4092

auto vmbr0.4
iface vmbr0.4 inet static
    address 192.168.4.2/24
    gateway 192.168.4.1
If you connect your Proxmox server to a trunk port on your managed switch or your WRT device, then all your VLANs will be accessible on vmbr0, and you simply select which VLAN to attach each VM to in its network settings.

[Screenshot: VM network device settings with the VLAN tag field]

Meanwhile your Proxmox management console will be on its own VLAN (I labeled it "4", but you can call it whatever you want except 0, 1, 4093, or 4094).

Also, you only need one gateway; it will work for all the VLANs.
 
Hello,

Thanks for your answers.

So I went for the solution of making the bridge VLAN aware and tagging the traffic at the VM level.
I have configured my Proxmox host as follows:

[Screenshot: Proxmox host network configuration]

On the OpenWRT side I have something like:
Code:
config device
    option name 'eth4'

config device
    option type 'bridge'
    option name 'VLANS'
    list ports 'eth4'
    list ports 'VLANS.42'

config bridge-vlan
    option device 'VLANS'
    option vlan '42'
    list ports 'eth4:t'
    list ports 'VLANS.42:t'

config interface 'Proxmox'
    option proto 'static'
    option device 'VLANS.42'
    option ipaddr '192.168.42.1'
    option netmask '255.255.255.0'

For the moment I'm unable to ping the gateway from inside the Proxmox host.
Any idea?

(Proxmox works nicely with eth4 and a classic subnet)

Thanks for your help!
 
You have two interfaces and only one is activated. What is the "enxbe3af2b6059f" interface? Are you potentially plugging into the wrong ethernet port?
 
You mean that I have to set up an interface for eth4? Does this interface have to be configured with its own address/gateway?

The enxbe is a virtual network card; it's a feature linked to the BMC.

The ethernet port is fine, I had the network working with a classic subnet and the same physical port configuration.

Thanks for your help!
 
No, you shouldn't need any more IP addresses or gateways. I have no OpenWRT experience, so I am not sure how to help you with that side. What devices, if any, are between your OpenWRT device and your Proxmox node? Are you going through a switch of some kind? It sounds like you may be suffering from an incorrectly configured switch port (tagged vs untagged). However you are connecting to Proxmox, that port on the switch or router has to be configured as a trunked/tagged port so that it can carry multiple VLANs. Otherwise, if you are using a regular old untagged/access port, or worse yet an unmanaged switch, the VLANs can't be passed on.
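Two quick checks on the PVE side can show whether tagged frames are actually being forwarded and arriving. A sketch, assuming the physical NIC is eno1 and the VLAN in question is 42 (substitute your own names and IDs):

```shell
# Show which VLAN IDs the bridge will forward on each port;
# the physical NIC and the VM tap devices should list the VLANs you expect
bridge vlan show

# Watch for tagged frames (e.g. VLAN 42) arriving on the physical NIC
# while the other side pings or a guest retries DHCP
tcpdump -e -nn -i eno1 vlan 42
```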
 
So the server and the router are linked together with a fiber AOC; there is a Mellanox card on each side. On the OpenWRT side my idea was to bridge the physical port with a VLAN device and enable VLAN filtering on it, as shown in the OpenWRT config above. I've tried tagging the traffic on both the VLAN device and the physical port.

I might hit this issue: https://www.apalrd.net/posts/2023/tip_mellanox/

I've seen this kind of message in the log: "[ 32.732509] mlx5_core 0000:19:00.1: mlx5e_vport_context_update_vlans:179:(pid 13470): netdev vlans list size (4080) > (512) max vport list size, some vlans will be dropped"
 
Hello again.

From the looks of things, your suspicion that this has to do with the Mellanox cards seems pretty solid. Did you try the workarounds suggested by apalrd on your PVE host?
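Independent of those, that kernel message suggests the VLAN-aware bridge is programming far more hardware VLAN filters (bridge-vids 2-4092, roughly 4080 entries) than the card's vport supports (512). One possible mitigation, assuming you only actually need a handful of VLANs, is to narrow bridge-vids in /etc/network/interfaces so the list fits under the limit. A sketch with example VLAN IDs:

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eth4
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    # only the VLANs actually in use, keeping the HW filter list well under 512
    bridge-vids 4 10 20 42
```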
 
Hello,

I'll try the workaround a bit later today and keep this post updated. I suppose there are downsides to using promiscuous mode, in terms of performance?
 
If it resolves the issue we can at least rule out your VLAN configuration on either the OpenWRT or PVE side. Let me know.
 
Setting promiscuous mode fixes the issue! I will try to find another way to fix this, instead of promiscuous mode.
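For reference, a sketch of what enabling it looks like (eth4 being the Mellanox port from this thread; making it persistent would need e.g. a post-up line in /etc/network/interfaces):

```shell
# Put the NIC into promiscuous mode, bypassing the hardware VLAN filter list
ip link set eth4 promisc on

# Check that the flag is set
ip -d link show eth4 | grep -o PROMISC
```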
 
Glad it's working; that clears your PVE and OpenWRT configurations as the culprit. Not sure of the exact implications of promiscuous mode, but from a quick Google search it does seem best to have it disabled. Please drop the resolution here if you find one. Happy hunting!
 
I've found a solution here: https://forum.proxmox.com/threads/m...nd-brigde-vlan-aware-on-proxmox-8-0-1.130902/

Using the proprietary mlnx_en driver, version 24.10-0.7.0.0, works with Proxmox on top of Debian.
I tried the latest version of this driver without success, but this specific version fixed the issue.

If anyone had success with VLAN configuration on a Mellanox ConnectX-4 LX MCX4121A-ACAT with a different driver, please let me know!

I've seen the new DOCA drivers from NVIDIA, but I was not really convinced.