SR-IOV passthrough or Vmbr for traffic between VMs?

nva

New Member
Oct 16, 2023
I'm planning a new server and wondering whether I should opt for a NIC capable of SR-IOV.

AFAIK, SR-IOV has the obvious advantage of avoiding CPU overhead if I give each VM a VF, but VM-to-VM traffic still needs to go all the way up to the switch before coming back to the same physical NIC.

With a virtual bridge, all traffic stays internal to the physical host, so that might be better for stability? Correct me if I'm wrong.

Latency is the deciding factor for me, and I don't know which approach gives me better latency. I will be running an iSCSI storage VM and a couple of other VMs.
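One way I could measure this myself is qperf between two VMs, once with both attached to the same vmbr and once with each on its own VF. A rough sketch, with a placeholder address:

Code:
# on the target VM (e.g. the iSCSI storage VM), start the listener
qperf

# on the client VM, run the latency tests; 10.0.0.10 stands in for
# the target VM's address
qperf 10.0.0.10 tcp_lat udp_lat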

Thanks for the help!
 
Yes, you should, and no, you don't have to configure it such that traffic has to flow across the bridge.

You bridge the PF for management only and then assign the VMs direct SR-IOV VFs. Here is mine:
[screenshot: network configuration showing the bridged PF and the SR-IOV VFs]


The two enp interfaces are the physical ports; one of them is WAN and one is LAN for me. The WAN one is passed to pfSense and the LAN one goes to my switch.

I access the Proxmox management interface through the bridge on the PF, but that's only for management. If I attached a VM to that bridge, it wouldn't have an exit path, since the traffic has no way out (no WAN).

The VFs created from the LAN PF do, though: the other VMs' LAN VFs can talk to the pfSense LAN VF, and the pfSense LAN VF can then talk to the pfSense WAN PF.

This is probably one of the best setups if you want the lowest possible latency, short of going full DPDK / TNSR / VPP (something I need to pick up at some point).
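If you haven't done the VF part before, the rough sequence on the host looks like this; the interface name, VF count, VM ID and PCI address are examples from my box, adjust to yours:

Code:
# create 4 VFs on the LAN port
echo 4 > /sys/class/net/enp2s0f1np1/device/sriov_numvfs

# find the PCI addresses of the new VFs
lspci | grep -i "virtual function"

# pass one VF through to VM 101
qm set 101 -hostpci0 0000:02:02.0

Keep in mind sriov_numvfs resets on reboot, so set it from a udev rule or a systemd unit if you want it to persist.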
 
I have the same problem with bridges once SR-IOV is enabled. A VM attached to the bridge won't communicate with anything except other VMs on the same bridge. I tried moving the bridge port for vmbr from the PF to another VF, but it still doesn't work.

Kind of wish Proxmox supported DPDK for OVS.
 
Due to obscure driver decisions in Intel's i40e driver, you have to manually add the MAC address of each device that is to "share" the PF port, so they can communicate among themselves AND with the physical port on the PF:

Code:
iface enp2s0f1np1 inet manual # LAN PF

iface enp2s0f1v0 inet manual # VF0 passed through SR-IOV to a VM

auto vmbr0
iface vmbr0 inet static
        bridge-ports enp2s0f1np1
        bridge-stp off
        bridge-fd 0
        offload-rx-vlan-filter off
        hwaddress ether xx:xx:xx:xx:xx:xx # MAC of the bridge
        up /sbin/bridge fdb add xx:xx:xx:xx:xx:xx dev enp2s0f1np1 # MAC of the bridge
        up /sbin/bridge fdb add xx:xx:xx:xx:xx:xx dev enp2s0f1np1 # MAC of VF0 attached to a VM (SR-IOV/PCI passthrough)
        up /sbin/bridge fdb add xx:xx:xx:xx:xx:xx dev enp2s0f1np1 # MAC of a directly attached VM on the bridge...

This can be automated with this script:
https://github.com/jdlayman/pve-hookscript-sriov
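To find the MACs you need to add, list the VFs on the PF and check what actually made it into the forwarding table; the interface name is taken from the config above:

Code:
# show the VFs and their MAC addresses as seen from the PF
ip link show enp2s0f1np1

# verify the fdb entries after the bridge comes up
bridge fdb show dev enp2s0f1np1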
 
