Non-optimal routing speeds with Proxmox 7


Nov 20, 2020
Hey all,

I'm in the process of setting up a Mikrotik CHR (virtual router) for routing 10Gb/s network traffic. However, testing with EXFOs has shown that, with random packet sizes, I am not getting anywhere near 10Gb/s (I'm getting around 6Gb/s on average; it should be closer to 8Gb/s). I've tried routing with the Proxmox host itself, and I'm getting similar results.

Routing full-size 1470-byte packets works well, and I'm getting the expected 9.9Gb/s throughput.
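For anyone without an EXFO tester who wants to reproduce this, a rough software cross-check is possible with iperf3: small UDP datagrams stress packets-per-second far more than a single full-MTU TCP stream does. This is only a sketch; the IP address, packet size, and duration below are placeholders:

```shell
# On a host behind the router under test: start the receiver
iperf3 -s

# On the sender: push UDP at 10Gb/s with small (200-byte) datagrams for 30s.
# 192.0.2.10 is a placeholder address for the receiver.
iperf3 -c 192.0.2.10 -u -b 10G -l 200 -t 30
```

Note that iperf3 itself can become the bottleneck at these packet rates, so results are indicative rather than authoritative.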

I'm going to try and run PFSense directly on the host to see if I can get better results with FreeBSD.

Is anyone here running virtual routers with 10Gb/s network throughput?

Hardware-wise, I'm using a Dell XR12 as a host. I have two network cards on the chassis, one Dell X810 (with the latest ICE drivers installed on the host) and a Broadcom card that hasn't been working great. The server has an Intel(R) Xeon(R) Gold 6338N CPU and 256GB of DDR4 RAM. I've toggled hyperthreading off, but without any change.

Thanks for your answer. I've tried multiqueue and virtio, but I'm not getting much more. I've tried passing the Broadcom card directly to the CHR via PCI passthrough, and it's been working somewhat better, but I'm still getting some frame loss and poor routing performance with random-sized packets.
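For reference, the multiqueue setup mentioned above can be configured from the Proxmox host with `qm set`. This is a sketch; the VM ID (100), bridge name, queue count, and guest interface name are placeholders for your own setup:

```shell
# Give the guest's virtio NIC 8 queue pairs (ideally matching its vCPU count)
qm set 100 --net0 virtio,bridge=vmbr0,queues=8

# Inside the guest, verify/adjust the channel count (interface name may differ)
ethtool -L eth0 combined 8
```

Multiqueue only helps if traffic spreads across many flows, since packets are steered to queues by flow hash; a single flow still lands on one queue.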

I'm not familiar with xdp, so I will learn about that. Do you have any papers or reading related to it?

Alright, I've tried VyOS bare metal on a similar chassis, and I'm getting wonderful routing speeds (10G at random packet sizes). The issue is, when virtualizing VyOS, I'm hitting the same performance problems again, even with the whole PCI card given to the VM via PCI passthrough. I'm seeing very high CPU usage for IRQ handling on the host, and it seems to kill the performance.

Am I doing something wrong?
We've been virtualizing Mikrotik CHR routers on Proxmox for several years now. In my opinion, the performance limitation is the cost of processing packets twice, at kernel level, in both the host and guest network stacks. As you noticed, there is no problem achieving 10Gbps with a single TCP session, thanks to TSO (processed in the NIC controller). But TSO efficiency decreases with many TCP sessions, and other protocols like UDP cannot benefit from this hardware offloading, so that datapath is painful.
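You can see which of these offloads your NIC currently performs in hardware with ethtool. A quick sketch (the interface name ens1f0 is a placeholder):

```shell
# Show offload features relevant to the TSO/UDP discussion above
ethtool -k ens1f0 | grep -E 'tcp-segmentation-offload|generic-receive-offload|udp'
```

Features marked `[fixed]` cannot be toggled on that driver.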
There are many ways to reach 10Gbps at random packet sizes; here are the ones I know well:
1- SR-IOV with PCI passthrough lets you expose the NIC directly in the guest OS (the number of VFs depends on the network card, generally limited to 127 per physical port), but the NIC driver needs to be supported by CHR.
2- Userspace packet processing; the most popular network stacks are probably OVS-DPDK and VPP (fast and stable network stacks).
3- Kernel packet processing with optimizations and/or bypass solutions; from what I've read, XDP is very promising (fully compatible with the classical Linux control-plane tools).
4- There are specialized network cards like DPUs or SmartNICs that can accelerate network processing (these need more integration effort).
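As a sketch of option 1 above: SR-IOV virtual functions are created through sysfs and then passed to the VM. The interface name, VF count, PCI address, and VM ID below are all placeholders, and the NIC, its driver, and the BIOS (IOMMU enabled) must all support SR-IOV:

```shell
# Create 4 virtual functions on the physical port
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs

# List the resulting VF PCI devices
lspci | grep -i 'virtual function'

# Pass one VF to the guest (PCI address is an example)
qm set 100 --hostpci0 0000:17:01.0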
Since your last post, have you made any progress?

As you've stated, the Linux kernel is the bottleneck, and solutions to this include XDP and DPDK. (I've also dabbled with VPP, though I must say its documentation is harder for me to understand.)
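For what it's worth, attaching an XDP program is done with iproute2 once you have a compiled BPF object. This is only a sketch; the interface name, object file, and section name are placeholders (the object would come from compiling a small C program with `clang -target bpf`):

```shell
# Attach a compiled XDP program to the interface
ip link set dev ens1f0 xdp obj xdp_prog.o sec xdp

# Detach it again
ip link set dev ens1f0 xdp off
```

The win comes from the program running at the driver level, before the kernel allocates an skb, which is exactly where the per-packet cost at random packet sizes lives.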

This hasn't been a priority for us, but I'm sure 10G routing on virtual machines will be back on the menu soon enough.

Have a nice day!

