VM network speed limits?

jchung

New Member
Oct 18, 2024
I have a recently acquired Supermicro server: X10DRH-iT motherboard, 2 x Xeon E5-2630L v4, 128 GB ECC RAM (8 x 16 GB sticks), and a quad-port 10GbE network adapter.
I added all four ports of the quad-port 10GbE adapter to a Linux bond interface (bond0), then created a Linux bridge interface (vmbr1) with bond0 as its bridge port.
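For reference, a bond + bridge setup like that usually looks roughly like this in /etc/network/interfaces (NIC names, bond mode, hash policy and the address are placeholders, not necessarily what's on my box):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp4s0f0 enp4s0f1 enp4s0f2 enp4s0f3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr1
iface vmbr1 inet static
        address 192.168.10.2/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0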

I installed the latest stable Proxmox and created a VM for TrueNAS Scale. I allocated 8 "host"-type vCPUs and 32 GB RAM, with VirtIO for the VM's network adapter.
From within the VM, I run iperf3 in server mode. From the Proxmox host, I run the iperf3 client against the iperf3 server in the VM.
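The VM NIC and the test are nothing fancy; roughly like this (VM ID, bridge name and address are placeholders):

Code:
# VM NIC: VirtIO model attached to the bridge
qm set 100 --net0 virtio,bridge=vmbr1

# inside the TrueNAS VM
iperf3 -s

# on the Proxmox host
iperf3 -c <vm-ip> -t 30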

iperf3 reports at most about 25 Gbps and the VM appears to be using only about 2.5 CPUs. There is no other activity on the TrueNAS VM at the moment.

Am I running into a single-core/thread CPU performance limit that's restricting me from getting more than 25 Gbps between the Proxmox host and the TrueNAS Scale VM?
 
You must run iperf3 in parallel (multi-stream) mode, e.g. with "-P 4", to get higher results; for just a single stream, your measured 25 Gbps is fine.
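For example (server still running in the VM; the address is a placeholder):

Code:
# single stream
iperf3 -c <vm-ip> -t 30

# four parallel streams
iperf3 -c <vm-ip> -t 30 -P 4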
 
> You must run iperf3 in parallel (multi-stream) mode, e.g. with "-P 4", to get higher results; for just a single stream, your measured 25 Gbps is fine.
I've tried with -P 2, -P 4, -P 8, and -P 16. Max I've been able to get is about 25 Gbps.

I did try iperf3 locally on the host (host as both server and client, using localhost as the IP) and I get ~30 Gbps. When I do the same iperf3 test inside the TrueNAS VM, I get about 48 Gbps. So it looks like the Proxmox host would be the bottleneck.
 
Some more info: I created a Debian 12.5 LXC container and another Ubuntu 22.04 VM. I tested from the LXC container (client) to the TrueNAS VM (server) with iperf3, with -P options of 4, 8, and 16, and could get around 17 Gbps. From the Ubuntu VM (client) to the TrueNAS VM (server), I could get about 16 Gbps.
 
Hmm, maybe a PVE kernel problem. As I don't have a bonded config here, I can't test it myself, sorry.
Maybe you still have a 6.5.x kernel installed in your PVE installation that you could boot and test?
 
> Hmm, maybe a PVE kernel problem. As I don't have a bonded config here, I can't test it myself, sorry.
> Maybe you still have a 6.5.x kernel installed in your PVE installation that you could boot and test?
Thanks for the suggestion. Unfortunately I can only go back to 6.8.4.
 
> Hmm, maybe a PVE kernel problem. As I don't have a bonded config here, I can't test it myself, sorry.
> Maybe you still have a 6.5.x kernel installed in your PVE installation that you could boot and test?
OK, I'm attempting to downgrade my kernel to 6.5.13.
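Roughly along these lines, if the 6.5 opt-in kernel package is still available (package and version names may differ, check what "kernel list" actually shows):

Code:
# see which kernels are installed / bootable
proxmox-boot-tool kernel list

# install the older kernel series and pin it for the next boot
apt install proxmox-kernel-6.5
proxmox-boot-tool kernel pin <6.5.13-x-pve version from the list above>
reboot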
 
With a hint from this post https://forum.proxmox.com/threads/iperf3-speed-same-node-vs-2-nodes-found-a-bug.146805/, I was able to determine which physical CPU (NUMA node) the quad-port 10GbE NIC is attached to. I then reduced the TrueNAS Scale VM to 4 cores and pinned them to the same CPU as the NIC.
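Roughly, the check and the pinning look like this (NIC name, VM ID and core list are placeholders; adjust to whatever your hardware reports):

Code:
# which NUMA node / physical CPU the NIC is attached to (-1 means no NUMA info)
cat /sys/class/net/enp4s0f0/device/numa_node

# which cores belong to that node
lscpu | grep -i numa

# pin the VM's vCPUs to cores on that node (qm affinity needs PVE 7.3 or newer)
qm set 100 --affinity 0-3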

iperf3 from the host to the TrueNAS Scale VM is now up to 33-36 Gbps, so an improvement.

I also spent a lot of time over the weekend upgrading the firmware on the quad-port NIC from a 2015 release to a 2023 release, but I think the biggest improvement came from pinning the vCPUs to pCPUs. It looks like if traffic crosses physical CPUs and PCIe buses, I'm limited to about 16 Gbps, and I land somewhere between 16 and 36 Gbps depending on how many of the processes cross physical CPUs.
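A quick way to see the cross-socket effect is to force the iperf3 client onto one node or the other (node numbers and the address are placeholders; numactl may need to be installed first):

Code:
# client pinned to the NIC's node
numactl --cpunodebind=0 --membind=0 iperf3 -c <vm-ip> -P 4 -t 30

# client pinned to the other socket
numactl --cpunodebind=1 --membind=1 iperf3 -c <vm-ip> -P 4 -t 30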

At this point, I don't know if it's a limitation of Proxmox or if I'm simply CPU-bound on network performance.
 