Networking benchmarks on QEMU guest

pmisch

Member
Feb 6, 2020
Hi,

I'm planning to run different networking benchmarks on guests. My physical host has two 10G ports, which I would like to make available to the guest.
There's another dedicated host, also with two 10G ports, that will be connected to the two ports on my Proxmox host. I will then pump traffic through the VM using TRex.

What would be the best option for getting the ports through to the guest? I figure plain Linux bridges will have the worst performance; is that assumption correct?
What about OVS? Will it perform better than Linux bridges?
Or would the third option be best, passing through the PCIe device to the guest?

I should mention that I won't need any offloading features whatsoever. My guest OSes will not make use of them; in OPNsense, for example, all those features are disabled by default anyway.

Thank you

Edit:
There seems to be another option: using OVS with DPDK. Does anyone have experience with that?
To be clear: does your host have network interfaces above and beyond the two 10G ports? Will more than one Proxmox guest be utilizing the 10G ports per host?

When utilizing virtio and multi-queue, with or without jumbo frames, some folks have essentially saturated a 10G link with both basic Linux bridges and OVS, so you may want to start there for ease of use.
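For reference, multiqueue can be enabled per virtual NIC in the VM configuration; a minimal sketch, assuming VM ID 100, bridge vmbr0, and 4 queues (those values are examples, not from the thread):

```shell
# On the Proxmox host: give the virtio NIC 4 queues
# (VM ID 100 and bridge vmbr0 are example values)
qm set 100 --net0 virtio,bridge=vmbr0,queues=4

# Inside a Linux guest: check / set the number of combined channels
# on the interface (eth0 is an example name)
ethtool -l eth0
ethtool -L eth0 combined 4
```

A common rule of thumb is to match the queue count to the number of vCPUs assigned to the guest.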

SR-IOV, if the network interface and host motherboard are capable, offers the best combination of performance plus the ability to natively utilize the port in multiple VMs simultaneously, IMO.
Passthrough has native performance but limits the port to a single guest VM (sometimes all ports on a single PCI device are sequestered together).
OVS has shown similar to slightly better overall throughput relative to basic Linux bridges. OVS-DPDK: I'm not sure anyone has gotten this working on Proxmox.
Linux bridges are the easiest to set up.
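If you do go the SR-IOV route, virtual functions are typically created through sysfs on the host; a sketch, assuming the port shows up as enp1s0f0 and the NIC driver supports SR-IOV (the interface name is an example):

```shell
# How many virtual functions does the NIC support?
# (enp1s0f0 is an example interface name)
cat /sys/class/net/enp1s0f0/device/sriov_totalvfs

# Create two VFs; they appear as their own PCI devices
echo 2 > /sys/class/net/enp1s0f0/device/sriov_numvfs

# Each VF can then be passed to a guest like any PCI device
lspci | grep -i "virtual function"
```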
Thanks for the reply. The Proxmox host is dedicated to benchmarking, so no other guest will utilize CPU, RAM, or Ethernet.

One thing I didn't consider is that the guest OS might not support the passed-through interface, so passthrough might be the worse option. On top of that, I don't think my mainboard implements IOMMU, so that option seems to be off the table.
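For what it's worth, IOMMU support can be checked from the running host before ruling it out; a quick sketch (note that intel_iommu=on or amd_iommu=on usually also needs to be on the kernel command line):

```shell
# Did the kernel detect an IOMMU? (DMAR on Intel/VT-d, AMD-Vi on AMD)
dmesg | grep -e DMAR -e IOMMU

# If enabled, the IOMMU groups show which devices must be
# passed through together
find /sys/kernel/iommu_groups/ -type l
```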

I will try this multiqueue feature, thank you.