Trouble Capturing Span Port Traffic via SR-IOV Passthrough in VM on Proxmox 8.1

zharfanug
Feb 7, 2024
Hello Proxmox Community,

I'm new to Proxmox and SR-IOV, and I've been exploring the configuration options for SR-IOV passthrough and span port monitoring in my virtualized environment. Despite following the documentation and attempting various troubleshooting steps, the span port traffic isn't being captured inside the virtual machine. I'm sure the switch's span port configuration is correct, because I can see the mirrored traffic with tcpdump -i from the Proxmox shell, but when I run tcpdump -i inside the VM I can't capture anything beyond broadcast traffic (like LLDP).

Here's a summary of my current setup:
Proxmox info:
PVE version: pve-manager/8.1.4/ec5affc9e41f1d79 (kernel: 6.5.11-8-pve)
Ethernet (only the relevant interfaces are listed):
  Interface Name: ens21f0
  PCI Name: Intel Corporation I350 Gigabit Network Connection (rev 01)
  PCI ID: 0000:b1:00.0
  Supports SR-IOV: yes
  Configured (desired) VFs: 6
  Total VFs: 8

  Interface Name: ens21f0v0
  PCI Name: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
  PCI ID: 0000:b1:10.0
  MAC: e2:4d:71:f9:34:84
VM (Ubuntu) info:
OS: Ubuntu 22.04.3 LTS (kernel 5.15.0-92-generic)
Ethernet (only the relevant interface is listed):
  Interface Name: ens16
  PCI Name: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
  PCI ID (from within VM): 0000:00:10.0
  MAC: e2:4d:71:f9:34:84
Span port configuration (from switch): direct span with an Ethernet port as destination (no bonds; specific IPs and VLANs)

What I've done / my configuration:
1. Check BIOS and make sure VT-d and SR-IOV are enabled (already enabled by default)
2. Enable intel_iommu=on iommu=pt on the kernel command line; dmesg output:
DMAR: IOMMU enabled
DMAR: Intel(R) Virtualization Technology for Directed I/O
DMAR-IR: Enabled IRQ remapping in x2apic mode
igbvf: Intel(R) Gigabit Virtual Function Network Driver
igbvf 0000:b1:10.0: enabling device (0000 -> 0002)
igbvf 0000:b1:10.0: Assigning random MAC address.
igbvf 0000:b1:10.0: Intel(R) I350 Virtual Function
igbvf 0000:b1:10.0: Address: e2:4d:71:f9:34:84
3. Enable modules like vfio; lsmod | grep vfio output:
vfio_pci 16384 1
vfio_pci_core 86016 1 vfio_pci
irqbypass 12288 76 vfio_pci_core,kvm
vfio_iommu_type1 49152 1
vfio 57344 7 vfio_pci_core,vfio_iommu_type1,vfio_pci
iommufd 77824 1 vfio
(also reloaded the services/modules and rebooted Proxmox itself to apply the changes)

4. Create the virtual function (VF) interfaces; cat /sys/bus/pci/devices/0000:b1:00.0/sriov_numvfs output:
# cat /sys/bus/pci/devices/0000:b1:00.0/sriov_numvfs
6
5. Set the interface to promiscuous mode and turn trust on for the VF
# ip link set ens21f0 promisc on
# ip link set dev ens21f0 vf 0 trust on
# ip link show ens21f0
2: ens21f0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 9c:c2:c4:5c:da:cf brd ff:ff:ff:ff:ff:ff
    vf 0 link/ether e2:4d:71:f9:34:84 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on
    vf 1 link/ether 6a:93:84:67:4f:8b brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on
    vf 2 link/ether 5a:05:2b:6c:fc:43 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on
    vf 3 link/ether 7e:33:d8:08:5a:49 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on
    vf 4 link/ether 5e:55:1f:05:cd:7f brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on
    vf 5 link/ether 7e:ba:dd:88:8c:ce brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on
    altname enp177s0f0
6. Assign vf0 as a raw PCI device to the Ubuntu VM
# cat /etc/pve/qemu-server/<vmid>.conf | grep -e cpu -e hostpci
cpu: host
hostpci0: 0000:b1:10.0
7. Turn on Ubuntu VM
8. On VM, check and make sure no firewall is enabled
9. On VM, add ens16 to netplan and do netplan apply
ens16:
  dhcp4: no
10. On VM, do tcpdump -i ens16 -nn, but only got broadcast traffic like LLDP
11. On VM, set promisc on
# ip link set ens16 promisc on
# ip link show ens16
2: ens16: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether e2:4d:71:f9:34:84 brd ff:ff:ff:ff:ff:ff
    altname enp0s16
12. On VM, do another tcpdump -i ens16 -nn, but still only got broadcast traffic like LLDP
13. I also tried adding VLAN 4095 and disabling spoof checking on ens21f0 vf 0
# ip link set dev ens21f0 vf 0 vlan 4095
# ip link set dev ens21f0 vf 0 spoofchk off
but the result was the same
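For completeness, this is roughly how I made the settings from steps 2–4 persist across reboots. It's only a sketch using the standard Debian/Proxmox file paths; adjust the PCI address and VF count to your own setup:

```shell
# /etc/default/grub — add the IOMMU parameters to the kernel command line,
# then run `update-grub` and reboot:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules — load the VFIO modules at boot, one per line:
#   vfio
#   vfio_iommu_type1
#   vfio_pci

# Recreate the VFs at boot (e.g. from a systemd unit or a boot script).
# Note: writing a new count to sriov_numvfs only works when it is
# currently 0, so reset it first.
echo 0 > /sys/bus/pci/devices/0000:b1:00.0/sriov_numvfs
echo 6 > /sys/bus/pci/devices/0000:b1:00.0/sriov_numvfs
```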

Could someone kindly help me understand whether there are any additional steps or considerations I might have missed? Or maybe some of my steps are unnecessary? Any insights or guidance would be greatly appreciated.


Also, here's the detailed hardware info (for flags & capabilities):
CPU info:
  • model name: Intel(R) Xeon(R) Silver 4310 CPU @ 2.10GHz
  • flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
  • vmx flags: vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs pml ept_mode_based_exec tsc_scaling
Ethernet info (lspci -vv; some output omitted due to the word limit):
b1:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Subsystem: Inspur Electronic Information Industry Co., Ltd. 1G base-T QP EP014Ti1 Adapter
        Physical Slot: 21
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 32 bytes
        Interrupt: pin A routed to IRQ 18
        NUMA node: 1
        IOMMU group: 8
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] MSI-X: Enable+ Count=10 Masked-
                Vector table: BAR=3 offset=00000000
                PBA: BAR=3 offset=00002000
        Capabilities: [140 v1] Device Serial Number 9c-c2-c4-ff-ff-5c-da-cf
        Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 1
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
                IOVCap: Migration- 10BitTagReq- Interrupt Message Number: 000
                IOVCtl: Enable+ Migration- Interrupt- MSE+ ARIHierarchy+ 10BitTagReq-
                IOVSta: Migration-
                Initial VFs: 8, Total VFs: 8, Number of VFs: 6, Function Dependency Link: 00
                VF offset: 128, stride: 4, Device ID: 1520
                Supported Page Size: 00000553, System Page Size: 00000001
                Region 0: Memory at 0000207fffee0000 (64-bit, prefetchable)
                Region 3: Memory at 0000207fffec0000 (64-bit, prefetchable)
                VF Migration: offset: 00000000, BIR: 0
        Capabilities: [1a0 v1] Transaction Processing Hints
                Device specific mode supported
                Steering table in TPH capability structure
        Capabilities: [1c0 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [1d0 v1] Access Control Services
                ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
                ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
        Kernel driver in use: igb
        Kernel modules: igb

b1:10.0 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
        Subsystem: Inspur Electronic Information Industry Co., Ltd. I350 Ethernet Controller Virtual Function
        Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        NUMA node: 1
        IOMMU group: 193
        Region 0: Memory at 207fffee0000 (64-bit, prefetchable) [virtual] [size=16K]
        Region 3: Memory at 207fffec0000 (64-bit, prefetchable) [virtual] [size=16K]
        Capabilities: [70] MSI-X: Enable+ Count=3 Masked-
                Vector table: BAR=3 offset=00000000
                PBA: BAR=3 offset=00002000
        Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 0
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [1a0 v1] Transaction Processing Hints
                Device specific mode supported
                No steering table available
        Capabilities: [1d0 v1] Access Control Services
                ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
                ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
        Kernel driver in use: vfio-pci
        Kernel modules: igbvf
Please let me know if you need any other info.

Thanks
 
While I wasn't trying to set up span port monitoring, I used a lot of this thread to troubleshoot a problem with passing a VLAN-assigned VF into a VM.

I found that setting the interface parameters with ip link on the command line after boot didn't work, and I needed to run the commands as a systemd service at startup.

See this thread for my config that worked. https://forum.proxmox.com/threads/cant-reach-gateway-when-using-vlan.140618/
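The general shape of such a unit looks roughly like this. This is only a sketch with a hypothetical unit name, using the PCI address and interface from this thread; adapt both to your own hardware:

```ini
# /etc/systemd/system/sriov-ens21f0.service  (hypothetical name)
[Unit]
Description=Create and configure SR-IOV VFs on ens21f0
# run before the network stack configures interfaces
Before=network-pre.target
Wants=network-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
# create the VFs, then trust VF 0 and put the PF in promiscuous mode
ExecStart=/usr/bin/bash -c 'echo 6 > /sys/bus/pci/devices/0000:b1:00.0/sriov_numvfs'
ExecStart=/usr/sbin/ip link set ens21f0 promisc on
ExecStart=/usr/sbin/ip link set dev ens21f0 vf 0 trust on

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now sriov-ens21f0.service`.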
 
