SR-IOV success stories?

rungekutta

Hi, very happy Proxmox user here.
However... what are the real success stories with SR-IOV networking that could be shared here? I recently attempted this, based on the steps in the documentation complemented by other more detailed guides online (how to set VLANs on VFs etc.). However, the end results were... disappointing. Using iperf3 to test, I saw lots of packet drops / retransmissions between VMs attached to VFs on the same NIC, with behaviour changing depending on whether the VFs were on the same or a different physical port of the NIC. Same port bad, different port good. The behaviour also seemed to change with MTU and packet size: if I forced iperf3 to use smaller packets (below 1400 bytes), performance was much more consistent although generally poor. Switching back to Linux bridges again, and all was good.
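
For reference, roughly the kind of test this is based on; the address is a placeholder, and clamping the MSS with -M is just one way to force smaller packets:

Code:
    # on VM 1 (attached to a VF)
    iperf3 -s

    # on VM 2 (VF on the same NIC), 10.0.0.11 standing in for VM 1's address
    iperf3 -c 10.0.0.11 -t 30           # default MSS: lots of retransmissions
    iperf3 -c 10.0.0.11 -t 30 -M 1400   # MSS clamped below 1400: consistent but slow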

Hardware: Supermicro X11-SSM motherboard, Xeon 1225 v6 CPU, Intel XXV710-DA2 NIC, all with latest firmware from official sources. Proxmox 8.4.1 and test VMs vanilla Debian 12.

What are your experiences?
 
Hello,
I'm not as far along as you... but hopefully will be soon.

NIC: Broadcom P225P - 2 x 25/10G PCIe

My investigation so far with this NIC:
  • use the newest firmware
  • use the newest driver
  • use OVS if applicable (not Linux bridges)
  • check and apply the right settings (there are around "100" of them) on the card itself and in the BIOS
    • number of VFs, partitioning, NPAR, MTU, bandwidth limits, offloads, QoS, general "behaviour"... (see the sketch at the end of this post for the basic VF knobs)
    • as an example: the P225P allows full OVS offload and, depending on some settings, either always routes traffic to the next switch or acts as a local switch itself (with much higher bandwidth)
  • check/optimise the network settings on your directly attached switch(es)
... it's not just a matter of activating SR-IOV (as I initially thought)
... and the documentation on the internet is very fragmented
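
From what I've gathered so far, the basic VF creation and per-VF settings on the Linux side look roughly like this (interface name and values are only examples; the NPAR/offload/QoS knobs are vendor-specific and not covered here):

Code:
    # create 4 VFs on a physical function (interface name is an example)
    echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

    # per-VF settings via ip link: VLAN, spoof checking, trust
    ip link set enp1s0f0 vf 0 vlan 100
    ip link set enp1s0f0 vf 0 spoofchk off
    ip link set enp1s0f0 vf 0 trust on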
 
Thanks. Following on from my first note, I got a little further. According to Intel's own documentation, the MTU must be the same on the physical function and all virtual functions, otherwise it leads to undefined behaviour (which is what I saw). Previously I have run MTU 9000 on all physical ports and then either 9000 or 1500 on the Linux bridges as relevant, which has worked well, but that clearly isn't a pattern that works with SR-IOV.
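
In concrete terms, if I understand the requirement correctly, that means something like this (interface names are examples; the VF side is set inside each guest):

Code:
    # on the host: physical functions
    ip link set enp1s0f0 mtu 9000
    ip link set enp1s0f1 mtu 9000

    # inside each VM: the passed-through VF (ens16 here is a placeholder) has to match
    ip link set ens16 mtu 9000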

Also, my idea was to let the host use the physical port and assign VFs to the VMs. It's not entirely clear whether this is supposed to work, or if you can *only* use VFs, including for the host, once you've started down that route. I saw some weird behaviour here as well.
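
For context, I attached the VFs to the VMs via PCI passthrough, along these lines (VM ID and PCI address are placeholders):

Code:
    # find the VFs' PCI addresses
    lspci | grep -i "virtual function"

    # pass one through to a VM
    qm set 101 -hostpci0 0000:02:02.0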

All the documentation from Intel and Mellanox assumes that you compile and use their proprietary drivers and tools, which I have no interest in doing; that's another complication.
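
I stayed on the in-tree drivers; at least checking what's actually in use is straightforward (interface names are examples):

Code:
    # on the host: PF driver and firmware (i40e for the XXV710)
    ethtool -i enp1s0f0

    # inside a VM: VF driver (iavf)
    ethtool -i ens16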

Finally, you obviously can't use the Proxmox firewall any more, as it's by definition bypassed. Entirely logical, but a bit of a shame as I've used it for VM-to-VM isolation (in effect running each VM in its own DMZ).

So, yeah, mixed bag still…