100G network card and interrupt handling (ksoftirqd process loads a single CPU core at 100%)

alicho

New Member
Jun 10, 2025
I have a test server with a Mellanox ConnectX-5 (MT27800 Family, 100GbE, dual-port QSFP28) network card installed.
The server runs Proxmox VE 8.4.
For testing purposes, I created a virtual machine with Ubuntu 22.04 (VM 1).
I assigned one port of the Mellanox card to the virtual machine via PCI passthrough.
I also have a second identical server with a similar virtual machine (VM 2).
The two servers are connected via an AOC 100G QSFP28 3m cable.
I am trying to test the 100G channel bandwidth using the UDP protocol.

On VM 1, I launch 8 instances of iperf3 in server mode:
# iperf3 -s -p 3001 -D -4 ; … ; iperf3 -s -p 3008 -D -4
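For reference, the eight server instances above can be started in a loop; this sketch only prints the commands (dry run via echo) so nothing is launched until the echo is removed:

```shell
# Print the eight iperf3 server commands (ports 3001-3008).
# Drop the leading "echo" to actually start the daemons.
for port in $(seq 3001 3008); do
  echo iperf3 -s -p "$port" -D -4
done
```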

On VM 2, I launch 8 instances of iperf3 in client mode:
# iperf3 -c <vm1-ip> -Z -p 3001 -u -b 15G -l 3500 ; … ; iperf3 -c <vm1-ip> -Z -p 3008 -u -b 15G -l 3500
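The client side can be sketched the same way. VM1_IP below is a placeholder for VM 1's address, and the echo again makes this a dry run:

```shell
VM1_IP=192.0.2.1   # placeholder; substitute VM 1's actual address
# Print the eight iperf3 client commands: UDP (-u), 15 Gbit/s per
# stream (-b 15G), 3500-byte datagrams (-l), zero-copy send (-Z).
# Drop the leading "echo" to actually run them.
for port in $(seq 3001 3008); do
  echo iperf3 -c "$VM1_IP" -Z -p "$port" -u -b 15G -l 3500
done
```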

As a result, the total speed across all 8 streams is approximately 24 Gb/s.
During the test, I observe one CPU core on VM 1 at 100% load while the other cores remain idle. The load on that single core comes from a ksoftirqd process.
As I understand it, ksoftirqd handles deferred interrupt work (softirqs). One such kernel thread runs per CPU core, and according to numerous articles online, network interrupt handling should be parallelized across multiple CPU cores.
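To see how the softirq work is actually distributed inside the VM, the per-CPU counters in /proc/softirqs can be inspected (this assumes a Linux guest). A single rapidly growing NET_RX column would match the single busy ksoftirqd:

```shell
# Show the header row (CPU0, CPU1, ...) plus the NET_RX row:
# one receive-softirq counter column per CPU core.
grep -E 'CPU|NET_RX' /proc/softirqs
```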

If I install Ubuntu directly on the servers (no hypervisor) and run the same test, everything works as expected: I achieve significantly higher speeds, and the ksoftirqd processes load multiple CPU cores. This means the network card distributes incoming UDP traffic across multiple queues.
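It may therefore be worth checking, inside the VM, how many RX queues the passed-through port exposes and how its IRQs map to CPUs. A minimal sketch, assuming the interface is named ens16 (adjust to your VM) and printed as a dry run so nothing is changed:

```shell
IFACE=ens16   # assumed interface name inside the VM; adjust to yours
# Dry run: print the diagnostic/tuning commands instead of executing them.
echo ethtool -l "$IFACE"             # current vs. maximum channel (queue) counts
echo ethtool -L "$IFACE" combined 8  # request 8 combined RX/TX queues
echo "grep $IFACE /proc/interrupts"  # per-queue IRQ -> CPU distribution
```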

How can I achieve the same result in a Proxmox virtual machine?