SDN with multiple VLAN trunks

Mar 15, 2025
We are trying to replicate a vSphere design in which specific portgroups carry a set of VLANs each (e.g. 2-4,9-15 or 101-200). With the Proxmox SDN VNet option we can only enter one VLAN ID per VNet - how do we specify multiple VLAN IDs to assign to the VMs, so that they can only reach the specific VLANs configured and no others?
 
Yes, that is just one VLAN per VM, which is what we are doing. For the case where a VM requires a trunk with multiple VLANs, how would we assign an SDN VNet for this case (e.g. 20 VLANs, where it is not practical to assign 20 separate VNet NICs to a single VM)?
 
For the case where a VM requires a trunk with multiple VLANs, how would we assign an SDN VNet for this case (e.g. 20 VLANs, where it is not practical to assign 20 separate VNet NICs to a single VM)?
Ah. That makes sense.

That functionality is not currently available in the SDN.

We have implemented that by using a VLAN-aware bridge. However, that same bridge would need to exist on each PVE host in the cluster. The downside of this approach is that guests can tag their traffic however they want, so you would need to limit the allowed VLANs on the switch hardware, not within the cluster.
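
A minimal sketch of what that looks like in /etc/network/interfaces, assuming bond0 as the uplink and vmbr0 as the bridge name (names, MTU and the VLAN list are examples, adjust to your environment):

Code:
# VLAN-aware bridge carrying multiple VLANs to the guests (sketch; names are examples)
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    # Limit the bridge to the VLANs you intend to carry; guests can still
    # request any tag in this list, so also prune VLANs on the physical switch.
    bridge-vids 2-4 9-15 101-200

The guest NIC then points at vmbr0 and either gets a single tag or a trunks= filter in the VM configuration.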
 
I was looking around for this general functionality....

What I was hoping for was essentially an LACP bond on each physical host, with a bridge subscribed to 'all VM-scoped VLANs' to which one can attach those VMs which need multiple VLANs, **AND** singular VLAN-scoped bridges which one could use as the upstream port for typical VMs which only need to interact on one VLAN ...
is this a thing?
 
What I was hoping for was essentially an LACP bond on each physical host, with a bridge subscribed to 'all VM-scoped VLANs' to which one can attach those VMs which need multiple VLANs, **AND** singular VLAN-scoped bridges which one could use as the upstream port for typical VMs which only need to interact on one VLAN ...
is this a thing?
Yes.

If you have two physical interfaces (e.g., eno1 and eno2), you can bond them (e.g., bond1) and use LACP.

Then, you would assign the bond to a bridge (e.g. vmbr3) that was VLAN-aware.

The VMs that need multiple VLANs would use vmbr3 as their bridge.

For VMs that only need a single VLAN, you would create an SDN VLAN Zone that was backed by vmbr3. For each individual VLAN, you would create a Vnet with the appropriate VLAN tag.
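
As a sketch, with placeholder names (eno1/eno2, bond1, vmbr3, and the zone/VNet IDs and tag) and using the CLI; the same can be done in the GUI under Datacenter > SDN:

Code:
# /etc/network/interfaces on every host (sketch)
auto bond1
iface bond1 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr3
iface vmbr3 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# SDN VLAN zone backed by vmbr3, plus one VNet per single-VLAN network, then apply:
pvesh create /cluster/sdn/zones --zone vlanzone --type vlan --bridge vmbr3
pvesh create /cluster/sdn/vnets --vnet vlan20 --zone vlanzone --tag 20
pvesh set /cluster/sdn

VMs needing the trunk attach to vmbr3 directly; single-VLAN VMs attach to the per-VLAN VNets.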
 
yeah, current config is 4x 10G LACP bonded as bond0 and 2x 1G copper bonded as bond2.
The pxm hosts' main interface is off bond0:
INI:
### lo interface
auto lo
iface lo inet loopback
### lo interface

#Ph1_Bond0.0 eno1
auto eno1
iface eno1 inet manual
  mtu 9198
  bond-master bond0
  pre-up ethtool -G eno1 rx 4078 tx 4078
  pre-up ethtool -C eno1 rx-usecs 1
  pre-up ethtool -C eno1 tx-usecs 1
  pre-up ip link set eno1 txqueuelen 13888
#Ph1_Bond0.0 eno1

#Ph2_Bond0.1 eno2
auto eno2
iface eno2 inet manual
  mtu 9198
  bond-master bond0
  pre-up ethtool -G eno2 rx 4078 tx 4078
  pre-up ethtool -C eno2 rx-usecs 1
  pre-up ethtool -C eno2 tx-usecs 1
  pre-up ip link set eno2 txqueuelen 13888
#Ph2_Bond0.1 eno2

#Ph3_Bond0.2 eno3
auto eno3
iface eno3 inet manual
  mtu 9198
  bond-master bond0
  pre-up ethtool -G eno3 rx 4078 tx 4078
  pre-up ethtool -C eno3 rx-usecs 1
  pre-up ethtool -C eno3 tx-usecs 1
  pre-up ip link set eno3 txqueuelen 13888
#Ph3_Bond0.2 eno3

#Ph4_Bond0.3 eno4
auto eno4
iface eno4 inet manual
  mtu 9198
  bond-master bond0
  pre-up ethtool -G eno4 rx 4078 tx 4078
  pre-up ethtool -C eno4 rx-usecs 1
  pre-up ethtool -C eno4 tx-usecs 1
  pre-up ip link set eno4 txqueuelen 13888
#Ph4_Bond0.3 eno4

######### Chelsio quad card
## 10g1# Chelsio SFP+ Port1
auto enp5s0f4
iface enp5s0f4 inet manual
  mtu 9198
  bond-master bond2
  pre-up ethtool -C enp5s0f4 rx-usecs 1
  pre-up ethtool -C enp5s0f4 tx-usecs 1
  pre-up ip link set enp5s0f4 txqueuelen 13888
  pre-up mii-tool -F 1000baseT-FD -A 1000baseT-FD -R enp5s0f4
## 10g1 # Chelsio SFP+ Port1
#########
## 10g2 # Chelsio SFP+ Port2
auto enp5s0f4d1
iface enp5s0f4d1 inet manual
  mtu 9198
  bond-master bond2
  pre-up ethtool -C enp5s0f4d1 rx-usecs 1
  pre-up ethtool -C enp5s0f4d1 tx-usecs 1
  pre-up ip link set enp5s0f4d1 txqueuelen 13888
  pre-up mii-tool -F 1000baseT-FD -A 1000baseT-FD -R enp5s0f4d1
## 10g2  # Chelsio SFP+ Port2
#######
## 1g3 # Chelsio Copper port3
auto enp5s0f4d2
iface enp5s0f4d2 inet manual
  mtu 9198
  bond-master bond2
  pre-up ethtool -C enp5s0f4d2 rx-usecs 1
  pre-up ethtool -C enp5s0f4d2 tx-usecs 1
  pre-up ip link set enp5s0f4d2 txqueuelen 13888
  pre-up mii-tool -F 1000baseT-FD -A 1000baseT-FD -R enp5s0f4d2
## 1g3 # Chelsio Copper port3

#######
## 1g4 # Chelsio Copper port4
auto enp5s0f4d3
iface enp5s0f4d3 inet manual
  mtu 9198
  bond-master bond2
  pre-up ethtool -C enp5s0f4d3 rx-usecs 1
  pre-up ethtool -C enp5s0f4d3 tx-usecs 1
  pre-up ip link set enp5s0f4d3 txqueuelen 13888
  pre-up mii-tool -F 1000baseT-FD -A 1000baseT-FD -R enp5s0f4d3
## 1g4 # Chelsio Copper port4
#PH6_Bond2.3 enp129s0f3

# dummy nic for node-scoped intra-node-comms for VMs
auto foonic
iface foonic inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr2
    ovs_mtu 9198
    pre-up ip link set foonic txqueuelen 13888

########################################################
#  BONDS
## BOND0 - All node Data now that we disabled dcb
## Bond2 - alt pxm traffic. wired to dif circuit/switch
auto bond0
iface bond0 inet manual
  bond-slaves eno1 eno2 eno3 eno4
  bond-miimon 100
  bond-mode 802.3ad
  bond-xmit-hash-policy layer2+3
  mtu 9198
  pre-up ip link set bond0 txqueuelen 13888
  post-up ip link set eno1 mtu 9198
  post-up ip link set eno2 mtu 9198
  post-up ip link set eno3 mtu 9198
  post-up ip link set eno4 mtu 9198
## BOND0 - Data
############################################
## BOND2 - Admin - PXM Alt net resilience -#
auto bond2
iface bond2 inet manual
    bond-slaves enp5s0f4 enp5s0f4d1 enp5s0f4d2 enp5s0f4d3
    bond-miimon 1000
    bond-mode balance-xor
    bond-xmit-hash-policy layer2+3
    mtu 9198
    pre-up ip link set bond2 txqueuelen 13888
  post-up ip link set enp5s0f4 mtu 9198
  post-up ip link set enp5s0f4d1 mtu 9198
  post-up ip link set enp5s0f4d2 mtu 9198
  post-up ip link set enp5s0f4d3 mtu 9198
## BOND2 - Admin_Interfaces -----------------#
#============================================#
# Bond0 Node Vlan #################### VLANS #
# VLAN interfaces on bond0 for the pxm host  #
#_____________________________ VLAN 10 ___PXM_
auto bond0.10
iface bond0.10 inet manual
    address 10.10.10.40/24
    gateway 10.10.10.1
  broadcast 10.10.10.255
    mtu 9180
  vlan-id         10
  vlan-raw-device bond0
    pre-up ip link set bond0.10 txqueuelen 13888
    post-up ip route add 10.10.10.0/24 dev bond0.10 src 10.10.10.40 table 1010
    post-up ip route add 127.0.0.0/8 dev lo table 1010
    post-up ip route add default via 10.10.10.1 table 1010
    post-up ip rule add from 10.10.10.40 table 1010
#Main_Interface

#________________________ VLAN 198 ___STORAGE_
auto bond0.198
iface bond0.198 inet manual
    address 10.10.198.40/24
  broadcast 10.10.198.255
    mtu 9180
  vlan-id         198
  vlan-raw-device bond0
    pre-up ip link set bond0.198 txqueuelen 13888
    post-up ip route add 10.10.198.0/24 dev bond0.198 src 10.10.198.40 table 1198
    post-up ip route add 127.0.0.0/8 dev lo table 1198
    post-up ip rule add from 10.10.198.40 table 1198
#--------------------------- VLAN 198_STORAGE_
##############################################
#___________________________ VLAN 199____CEPH_
auto bond0.199
iface bond0.199 inet manual
    address 10.10.199.40/24
  broadcast 10.10.199.255
    mtu 9180
  vlan-id         199
  vlan-raw-device bond0
    pre-up ip link set bond0.199 txqueuelen 13888
    post-up ip route add 10.10.199.0/24 dev bond0.199 src 10.10.199.40 table 1199
    post-up ip route add 127.0.0.0/8 dev lo table 1199
    post-up ip rule add from 10.10.199.40 table 1199
#----------------------------VLAN 199____CEPH_
#___________________________ VLAN 43 _PXM_ALT_
auto bond2.43
iface bond2.43 inet static
  address 10.10.43.40/24
  broadcast 10.10.43.255
  mtu 9180
  vlan-id 43
  vlan-raw-device bond2
  pre-up ip link set bond2.43 txqueuelen 13888
  post-up ip route add 10.10.43.0/24 dev bond2.43 src 10.10.43.40 table 1043
  post-up ip route add 127.0.0.0/8 dev lo table 1043
  post-up ip rule add from 10.10.43.40 table 1043
# Secondary_Admin_Net ______ VLAN 43 _PXM_ALT_
#============================================#
###################################### VMBR0 #
#= All VM Traffic ===========================#
auto vmbr0
iface vmbr0 inet manual
  bridge-ports bond0
  bridge-stp off
  bridge-fd 0
  bridge-vlan-aware yes
  bridge-vids 2-9,11-197,443
  bridge-disable-mac-learning 0
  mtu 9180
  pre-up ip link set vmbr0 txqueuelen 13888
# All_VM_Traffic =============All VM Traffic #
###################################### VMBR0 #
# Dummy Net ########################## VMBR2 #
auto vmbr2
iface vmbr2 inet manual
    ovs_type OVSBridge
    ovs_ports foonic
    ovs_mtu 9180
    pre-up ip link set vmbr2 txqueuelen 13888
# Dummy Net ########################## VMBR2 #

# iDrac Softnic ###################### idrac #
auto idrac
iface idrac inet manual
    address 169.254.0.40/32
    mtu 1500
    post-up ip route add 169.254.0.40/32 dev idrac src 169.254.0.40 table 1101
    post-up ip route add 127.0.0.0/8 dev lo table 1101
    post-up ip rule add from 169.254.0.40 table 1101
# iDrac Softnic ###################### idrac #

#Permit Proxmox SDN to work
source /etc/network/interfaces.d/*
#Permit Proxmox SDN to work#
 
You could replace this code:

Code:
# dummy nic for node-scoped intra-node-comms for VMs
auto foonic
iface foonic inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr2
    ovs_mtu 9198
    pre-up ip link set foonic txqueuelen 13888


auto vmbr2
iface vmbr2 inet manual
    ovs_type OVSBridge
    ovs_ports foonic
    ovs_mtu 9180
    pre-up ip link set vmbr2 txqueuelen 13888

with this code:

Code:
auto vmbr2
iface vmbr2 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

You would not need to install OVS (it appears this is the only place where you are using OVS).
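
If the goal is several isolated, node-local segments on that one bridge, a guest NIC can then be attached with a VLAN tag, e.g. (VM ID and tag below are just examples):

Code:
# Attach a second NIC of VM 105 to the isolated, VLAN-aware vmbr2 on VLAN 50
qm set 105 --net1 virtio,bridge=vmbr2,tag=50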
 
I was under the impression that one needed OVS to support the SDN component (which I'd not leveraged yet, but had wanted to...). The point of the dummy OVS bridge was to allow services that cohabit a node to have a 'private', node-scoped network that wouldn't share traffic.

I initially wanted to have
- node-scoped interfaces (interfaces for vms that resided on the same host and needed to communicate with each other, but nothing else )
- cluster-scoped internal interfaces (cross node network for services/workloads that never egress)
- single-vlan-interfaces (for vms/lxc's which only have traffic on one interface )

but I couldn't seem to get the right config working to have single-VLAN vmbr interfaces plus a catch-all multi-VLAN one ... this was initially set up before the SDN components were released and I've not **REALLY** revisited the config since ....
 
I was under the impression that one needed OVS to support the SDN component
You do not need OVS for PVE SDN.

node-scoped interfaces (interfaces for vms that resided on the same host and needed to communicate with each other, but nothing else )
You are describing SDN Simple Zones. Each Vnet is an isolated network that only connects VMs.

cluster-scoped internal interfaces (cross node network for services/workloads that never egress)

You would use either SDN VLAN Zones or SDN EVPN/VXLAN Zones. If you use the VLAN Zone, you must ensure the switches are configured for the VLANs. If you use EVPN/VXLAN Zones, you will not need to change the switch configuration.

single-vlan-interfaces (for vms/lxc's which only have traffic on one interface )

SDN VLAN Zone will do this for you.
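
A rough CLI sketch of all three, with placeholder zone/VNet names, node name, peer addresses and tags (double-check the parameters against pvesh or just use the GUI):

Code:
# Node-scoped, isolated network (Simple zone restricted to one node):
pvesh create /cluster/sdn/zones --zone isol40 --type simple --nodes px-m-40
pvesh create /cluster/sdn/vnets --vnet isonet --zone isol40

# Cluster-wide internal network with no switch changes (VXLAN zone, tag = VNI):
pvesh create /cluster/sdn/zones --zone vxz --type vxlan --peers 10.10.10.40,10.10.10.41,10.10.10.42
pvesh create /cluster/sdn/vnets --vnet overlay1 --zone vxz --tag 100000

# One VNet per VLAN for single-VLAN guests (VLAN zone backed by vmbr0):
pvesh create /cluster/sdn/zones --zone vlanz --type vlan --bridge vmbr0
pvesh create /cluster/sdn/vnets --vnet vlan150 --zone vlanz --tag 150

# Apply the pending SDN configuration cluster-wide:
pvesh set /cluster/sdn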
 
I was under the impression that one needed OVS to support the SDN component
You do not need OVS for PVE SDN.

I think I needed it when I first set stuff up... that would have been PVE 7-ish? It wasn't until 8 that SDN was taken out of experimental, right?


I guess I'll have to play with it a bit more now that it's a real thing.


Curious, what have you found to be the upside of pvesdn now that it's a bit more mature?

I don't have a strong attachment to ovs really... but curious to hear what's compelling about switching...

(legitimately interested, and excited to hear from someone with a strong opinion about it ... :) )
ALSO curious (... since I have yer eyeballs and some of your brainspace for a spell ...):
is it usual for the VLAN bridge to not have any TX traffic?

In my env, the physical hosts are multi-homed, and so I use source-based routing to help ensure that asymmetric routing is less likely to happen...

Recently noticed that the VLAN bridge interface doesn't have any egress traffic registered in netstat ... haven't REALLY dug into it, as things are working as anticipated ... but it seemed a 'huh... that's odd... I think? ... or is it?' moment, and I never really thought about it much further....


:) Thanks in advance!
W




Shell:
root@px-m-40:~# netstat -ni
Kernel Interface table
Iface             MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0            9198 3881569098      0 183625 0      3960523431      0      1      0 BMmRU
bond2            9198  3718862      0  92095 0       3688412      0      0      0 BMmRU
bond0.10         9180 115693687      0      0 0      116428140      0      0      0 BMRU
bond0.198        9180 904701915      0      0 0      807976265      0      0      0 BMRU
bond0.199        9180 2333797055      0      0 0      2182448071      0      0      0 BMRU
bond2.43         9180  3626729      0      0 0       3626870      0    125      0 BMRU
eno1             9198 992867244      0      1 0      952178806      0      0      0 BMsRU
eno2             9198 1023839736      0      3 0      1201738293      0      0      0 BMsRU
eno3             9198 947096579      0      2 0      792210546      0      0      0 BMsRU
eno4             9198 917765539      0      2 0      1014395786      0      0      0 BMsRU
enp5s0f4         9198        0      0      0 0             0      0      0      0 BMsU
enp5s0f4d1       9198        0      0      0 0             0      0      0      0 BMsU
enp5s0f4d2       9198  2142286      0      0 0       1869887      0      0      0 BMsRU
enp5s0f4d3       9198  1576576      0      0 0       1818525      0      0      0 BMsRU
foonic           9198        0      0      0 0         15346      0      0      0 BMRU
fwbr105i0        9180   546404      0      0 0             0      0      0      0 BMRU
fwln105i0        9180  6078831      0  12006 0       7736305      0      0      0 BMRU
fwpr105p0        9180  7736305      0  11979 0       6078831      0      0      0 BMRU
idrac            1500        0      0      0 0         15491      0      0      0 BMRU
lo              65536 28833076      0      0 0      28833076      0      0      0 LRU
tap105i0         9180  7724326      0      0 0       6078804      0      0      0 BMPRU
tap3033i0        9180 14813341      0      0 0      17038381      0      0      0 BMPRU
tap402i0         9180  2567434      0      0 0       3781075      0      0      0 BMPRU
tap4187i0        9180    44789      0      0 0        873236      0      0      0 BMPRU
veth4186i0       9180    43516      0      0 0        665628      0      0      0 BMRU
vmbr0            9180 22706676      0      0 0             0      0      0      0 BMRU
 
we are using Open vSwitch with 2 bonded physical interfaces, and it works perfectly with SDN apart from this limitation of a single VLAN per VNet. To assign a trunk to a VM, we configured the VM NIC to map directly to the bridge, but we really need this functionality in the SDN, as it then becomes much easier and more uniform to manage across the cluster. We are looking to do something like the below:
sdn-vnet-tags.png
 
@wolfspyre

Curious, what have you found to be the upside of pvesdn now that it's a bit more mature?

We always used Open vSwitch (OVS) and encouraged our clients to do the same. It was easier to do some things.

However, since PVE SDN was released, we have discouraged using OVS and strongly recommend SDN.
  • Configuring OVS is done on a per-host basis, whereas SDN is cluster-wide, which reduces complexity (and work).
  • SDN is integrated with the PVE permissions system: you can control which networks users have access to. Plain Linux bridges and OVS do not tie into the PVE permission system in the same way.
  • Proxmox is spending significant time adding functionality to the SDN. For example, they are integrating the PVE firewall, and the Proxmox Datacenter Manager is expected to integrate with the SDN.
  • SDN has additional functionality that is not available for OVS or Linux bridges, such as DHCP, DNS, and IPAM integration.
  • While you could deploy VXLAN/EVPN without SDN, it is very easy to do so with SDN.
I don't have a strong attachment to ovs really... but curious to hear what's compelling about switching

OVS is an add-on. Unless you need something that OVS provides that the default Linux bridging or SDN does not provide, then why do the extra work to add it and change over the configuration file?

Recently noticed that the VLAN bridge interface doesn't have any egress traffic registered in netstat ... haven't REALLY dug into it, as things are working as anticipated ... but it seemed a 'huh... that's odd... I think? ... or is it?' moment, and I never really thought about it much further

That does seem unusual, and I am not sure why it might happen. If you have nothing (no host IP) on vmbr0 itself, then there would not be any traffic initiated from vmbr0 and no return traffic addressed to it; you would only see broadcast traffic, which shows up as RX with no TX.
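
One way to sanity-check that theory (plain iproute2/bridge commands, nothing PVE-specific):

Code:
# If the host has no address or routes on vmbr0 itself, the bridge device
# originates nothing, so a zero TX counter on vmbr0 is expected.
ip addr show vmbr0
ip route show dev vmbr0

# Ports enslaved to the bridge (their traffic is counted on the ports
# and on the taps/bond, not on the vmbr0 device itself):
bridge link show | grep vmbr0

# VLAN membership per port on the VLAN-aware bridge:
bridge vlan show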
 
@intelliimpulse

we are looking to do something as below

You are not the first. :)

https://bugzilla.proxmox.com/show_bug.cgi?id=5443
https://bugzilla.proxmox.com/show_bug.cgi?id=6272

I recommend you comment on the bug (aka feature request) that matches what you seek. This will signal to Proxmox's development team that there is interest.
 
currently trunks are only available on vmbrX directly, and filtering of the allowed VLANs is done in the VM configuration directly (net: ....,trunks=4-10;56;100)

the SDN usage is currently really 1 VLAN = 1 VNet (because VNets also carry IPAM, DHCP, subnets, ... which can't work with multiple networks per VNet)

but yes, maybe the SDN could be extended (multiple tags in a VNet = trunks, plus filtering of the allowed VLANs)
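
For reference, the current approach puts the trunk filter on the guest NIC itself; the VM ID and bridge name below are examples, and the VLAN list matches the one above:

Code:
# Attach net0 to the VLAN-aware bridge and allow only VLANs 4-10, 56 and 100.
# The guest must tag its own traffic; adding tag=<vid> as well would map the
# guest's untagged frames to that VLAN.
qm set 105 --net0 virtio,bridge=vmbr0,trunks='4-10;56;100'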
 
@intelliimpulse



You are not the first. :)

https://bugzilla.proxmox.com/show_bug.cgi?id=5443
https://bugzilla.proxmox.com/show_bug.cgi?id=6272

I recommend you comment on the bug (aka feature request) that matches what you seek. This will signal to Proxmox's development team that there is interest.
Seems we had not searched for the correct keywords - thanks for the links - this is exactly the feature we need implemented in order to minimize the functionality gap with what vSphere currently provides.
 