Proxmox and VLAN Trunking

spetrillo

Member
Feb 15, 2024
OK, this might be a little confusing, so I will try to explain...

I have a Lenovo m720q Tiny with an Intel I350-T4. I have been able to reconfigure the I350 so it now supports SR-IOV. According to the Intel datasheet I can have 8 virtual interfaces per physical interface, for a total of 32 virtual interfaces. That's a lot of virtual interfaces!
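(As a quick sanity check, the SR-IOV capability is visible in lspci for each port; a one-liner sketch, using the 01:00.0 address that appears in the IOMMU listing further down:)

Code:
# Show the SR-IOV capability block (incl. Total VFs) for the first I350 port; run as root
lspci -s 01:00.0 -vvv | grep -A6 -i "SR-IOV"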

Originally I had broken up my four physical interfaces into two LAGs, with each LAG having two ports. This allowed me to split up my VLANs. Now I am thinking I might want to create one LAG of 4 interfaces and trunk all VLANs across it. Am I right that this would mean all 32 virtual interfaces would have access to all the VLANs in the trunk? If so, I would have the ultimate flexibility in building VMs.

My initial VMs will be an OPNsense firewall running SR-IOV on the 4 interfaces. After that I will have Pi-hole, UniFi Controller, Zabbix, and Plex VMs... and plenty of room to build more! Getting the I350 reconfigured to support SR-IOV is the big move.
 
Sorry if I was not clear... when creating the VF interfaces off the I350, will the VFs have access to the same VLANs as the physical interfaces, or do I have the ability to customize VLAN access?

Right now the 4 physical interfaces are set up in an LACP LAG, and VLANs 1, 10, 11, 12, 20, and 30 are trunked across those 4 interfaces. Will the VF interfaces associated with the physical interfaces have access to the same VLANs?
 
A virtual function shares the resources of the NIC. VLANs are associated on top of it, on a bridge or via a vlan-raw-device. In short, yes, that should work.
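For example, a minimal sketch (interface name and VLAN 10 are taken from this thread; adapt to your setup): a VLAN can be stacked on a VF like on any other interface, or the PF can tag the VF's traffic outright.

Code:
# /etc/network/interfaces: VLAN 10 on top of VF enp1s0f0v0 via vlan-raw-device
auto vlan10
iface vlan10 inet manual
        vlan-raw-device enp1s0f0v0

# Alternative: let the PF tag all traffic of VF 0 on enp1s0f0 with VLAN 10
# ip link set enp1s0f0 vf 0 vlan 10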

Though it sounds like a rather complicated network setup to me. I assume the OPNsense will consume the physical interface(s) and provide private networks to the other VMs, which could all live on virtual interfaces on a separate bridge; no physical interface or VF needed. But I imagine you want to play with SR-IOV. :)
 
I am starting to believe SR-IOV is complicated. It looks like I got SR-IOV working, because I have VF interfaces attached to my physical interfaces, but I am a bit confused about what to pass through. I can pass through individual NIC ports as raw devices but not as mapped devices, so I am not sure whether I truly have it working.

What I wanted to do is pass the physical NICs to OPNsense and then use virtual interfaces for the rest of my VMs.

Oh, and passing through the raw PCI NIC ports just hung my Proxmox server... so I am not sure this works at all.
 
You need to have the NIC in its own IOMMU group, as only whole groups can be passed through. But why not just use a bridge without an IP and connect it to the NIC? That way you have a very simple setup with OPNsense as the firewall. The other VMs are connected to a second bridge, as is the OPNsense.
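Roughly like this, as a minimal sketch (vmbr1 and the choice of port are just examples):

Code:
# /etc/network/interfaces: vmbr0 has no IP and only hands the physical port to OPNsense,
# vmbr1 is a purely internal bridge for OPNsense's LAN side and the other VMs.
auto vmbr0
iface vmbr0 inet manual
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0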
 
My NICs, both the physical interfaces and the VF interfaces, are in their own IOMMU groups. I believe that's the whole rationale behind SR-IOV, but then again I am no network engineering expert. At least I thought that was the rationale.
 
Straight out of my Proxmox server:

IOMMU Group 0 00:02.0 VGA compatible controller [0300]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e92]
IOMMU Group 1 00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers [8086:3ec2] (rev 07)
IOMMU Group 2 00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
IOMMU Group 3 00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
IOMMU Group 4 00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
IOMMU Group 4 00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
IOMMU Group 5 00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
IOMMU Group 6 00:17.0 SATA controller [0106]: Intel Corporation Cannon Lake PCH SATA AHCI Controller [8086:a352] (rev 10)
IOMMU Group 7 00:1b.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #21 [8086:a32c] (rev f0)
IOMMU Group 8 00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:a308] (rev 10)
IOMMU Group 8 00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
IOMMU Group 8 00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
IOMMU Group 8 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
IOMMU Group 8 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-V [8086:15bc] (rev 10)
IOMMU Group 9 01:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU Group 10 01:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU Group 11 01:00.2 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU Group 12 01:00.3 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU Group 13 03:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller 980 [144d:a809]

It looks like they are in separate groups. Groups 9-12 are the four ports of the I350 PCI card.
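(For reference, a listing like this can be generated with a small loop over /sys/kernel/iommu_groups; this is one common way to do it, not necessarily how the output above was produced.)

Code:
#!/bin/bash
# Print every IOMMU group and the PCI devices it contains
for group in $(ls -d /sys/kernel/iommu_groups/* | sort -V); do
    echo "IOMMU Group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo "    $(lspci -nns "${dev##*/}")"
    done
done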
 
It doesn't look like there is a VF listed, or one in a separate group. In that case you can only pass through the whole port. Since you have a second NIC, my previous suggestion is simpler and will work.
 
Sorry, I forgot to run my script to enable the VFs.
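For reference, enabling VFs on the igb driver mostly comes down to writing the desired count to sysfs; a minimal sketch along those lines (interface names and the 4-VFs-per-port count match the output below, but this is not necessarily the exact script):

Code:
#!/bin/bash
# Create 4 VFs on each port of the I350 via the driver's sysfs interface
for pf in enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3; do
    echo 4 > /sys/class/net/$pf/device/sriov_numvfs
done

With the VFs created, here is the IOMMU listing again: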

IOMMU Group 0:
00:02.0 VGA compatible controller [0300]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e92]
IOMMU Group 1:
00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers [8086:3ec2] (rev 07)
IOMMU Group 2:
00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
IOMMU Group 3:
00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
IOMMU Group 4:
00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
IOMMU Group 5:
00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
IOMMU Group 6:
00:17.0 SATA controller [0106]: Intel Corporation Cannon Lake PCH SATA AHCI Controller [8086:a352] (rev 10)
IOMMU Group 7:
00:1b.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #21 [8086:a32c] (rev f0)
IOMMU Group 8:
00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:a308] (rev 10)
00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-V [8086:15bc] (rev 10)
IOMMU Group 9:
01:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU Group 10:
01:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU Group 11:
01:00.2 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU Group 12:
01:00.3 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU Group 13:
03:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller 980 [144d:a809]
IOMMU Group 14:
02:10.0 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 15:
02:10.4 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 16:
02:11.0 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 17:
02:11.4 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 18:
02:10.1 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 19:
02:10.5 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 20:
02:11.1 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 21:
02:11.5 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 22:
02:10.2 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 23:
02:10.6 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 24:
02:11.2 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 25:
02:11.6 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 26:
02:10.3 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 27:
02:10.7 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 28:
02:11.3 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)
IOMMU Group 29:
02:11.7 Ethernet controller [0200]: Intel Corporation I350 Ethernet Controller Virtual Function [8086:1520] (rev 01)

With that said, do I assign the VFs to OPNsense while the physical interfaces stay available for regular VMs?
 
You want to lock away the VMs so they don't have direct network access. Then just pass through the NIC to the OPNsense and use a virtual bridge to connect all the VMs. This is simpler and less error-prone than SR-IOV.
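In Proxmox terms that boils down to something like the following (the VM IDs and the bridge name are examples; the PCI address is from the listing above):

Code:
# Pass one I350 port straight through to the OPNsense VM (here VM ID 100)
qm set 100 -hostpci0 0000:01:00.0

# Give an ordinary VM (here VM ID 101) a VirtIO NIC on an internal bridge
qm set 101 -net0 virtio,bridge=vmbr1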
 
I appreciate you staying with me, because I am very confused by all this. I thought passthrough and SR-IOV were one and the same, so maybe that is where I am getting caught up. Here is my interfaces configuration:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet dhcp

auto enp1s0f0
iface enp1s0f0 inet manual

auto enp1s0f1
iface enp1s0f1 inet manual

auto enp1s0f2
iface enp1s0f2 inet manual

auto enp1s0f3
iface enp1s0f3 inet manual

iface enp1s0f0v0 inet manual
iface enp1s0f0v1 inet manual
iface enp1s0f0v2 inet manual
iface enp1s0f0v3 inet manual
iface enp1s0f1v0 inet manual
iface enp1s0f1v1 inet manual
iface enp1s0f1v2 inet manual
iface enp1s0f1v3 inet manual
iface enp1s0f2v0 inet manual
iface enp1s0f2v1 inet manual
iface enp1s0f2v2 inet manual
iface enp1s0f2v3 inet manual
iface enp1s0f3v0 inet manual
iface enp1s0f3v1 inet manual
iface enp1s0f3v2 inet manual
iface enp1s0f3v3 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 1,10-12,20,30

auto vmbr0.1
iface vmbr0.1 inet dhcp


I have created a bond from the 4 physical interfaces of the I350 and set up all my needed VLANs to be available over that bond. vmbr0.1 carries my management IP. If I am hearing you correctly, I do not need the VFs at all. I simply pass through the physical interfaces of the I350 (enp1s0f0, enp1s0f1, enp1s0f2, enp1s0f3) and use vmbr0 to handle all the other VMs. Am I finally getting it??

Steve
 
Yes. Passing through the physical interfaces lets OPNsense handle them directly; no bridge or other emulation is needed (which also means those ports can no longer be part of bond0/vmbr0 on the host). The only thing is, you'll need to either use a separate NIC port for accessing the PVE node or live with the fact that you can't reach it while the OPNsense VM is down.
Code:
                               ┌─────┐ ┌─────┐ ┌─────┐                      
                               │ VM3 │ │ VM1 │ │ VM2 │                      
                               └──┬──┘ └──┬──┘ └──┬──┘                      
                                  │       │       │                        
                                  │       │       │ virtual NIC (net0)      
                                  ├───────┴───────┴┐                        
                                  │                │                        
                                  │     vmbr0      │                        
                                  │                │                        
                                  └────────┬───────┘                        
                                           │virtual NIC (net0)              
                                    ┌──────┴──────┐                        
                                    │             │                        
                                    │             │                        
                                    │   OPNsense  │                        
                                    │             │                        
                                    │ ┌─────────┐ │   ▲                    
                                    └─┼─┼─┼┼─┼┼─┼─┘   │                    
                                 NICs │ │ ││ ││ │     │Passthrough          
                                      ├─┼─┼┼─┼┼─┤     │                    
            ┌─────────────────────────┴─────────┴─────┴──────────────────┐  
            │                                                            │  
            │                                                            │  
            │                                                            │  
            │                                                            │  
            │                  Proxmox (physical machine)                │  
            │                                                            │  
            │                                                            │  
            │                                                            │  
            └────────────────────────────────────────────────────────────┘

EDIT: Don't get me wrong, SR-IOV is a cool feature but probably not what you're looking for.
 