(EVPN) SDN inter-node VM-to-VM throughput looks degraded

freakingObelix

New Member
Mar 11, 2025
Hello folks. I know there are many other posts about this, and I'm not sure whether I've done everything that was suggested in them, but so far:
1. enabled multiqueue on both VM network adapters
2. tested with iperf3: ~13 Gbps between VMs on the same node, but only ~1.6 Gbps between VMs on different nodes, with or without multiqueue enabled (test commands in the sketch after this list)
3. both physical nodes are bare metal with dual 10 Gbps NICs; the SDN traffic uses a dedicated 10 Gbps NIC, not shared with storage
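
For reference, a minimal sketch of how I set up the test; the VM ID 100, bridge vmbr0, and server address 10.0.0.2 are placeholders for your own values. Multiqueue is set via the queues= option on the virtio NIC, and iperf3's -P flag adds parallel streams so a single stream isn't the bottleneck:

```bash
# On the Proxmox host: enable multiqueue (e.g. 4 queues) on the VM's virtio NIC.
# VM ID 100 and bridge vmbr0 are placeholders; adjust to your setup.
qm set 100 --net0 virtio,bridge=vmbr0,queues=4

# Inside the VMs: run iperf3 with parallel streams to rule out a
# single-stream bottleneck. 10.0.0.2 is a placeholder server address.
iperf3 -s                      # on the receiving VM
iperf3 -c 10.0.0.2 -P 4 -t 30  # on the sending VM
```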

My HW is a bit old, but solid: dual Xeon X5670 CPUs on each node; the NICs are HP (Emulex) NC553i.
I'm using the no-subscription repos, Proxmox VE 8.4.1, updated today, with FRR 10.2.2-1+pve1.

Should I expect more throughput and assume I did something wrong, or that there's an issue with the kernel or some module? Or is this normal?

Thanks for helping!
 
I really don't know how well VXLAN performs on such a "bit old" (2010) CPU ;)

Also, modern NICs have VXLAN offloading; without it, the VXLAN encapsulation is done entirely on the CPU here.
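
One way to check whether the NIC offers VXLAN offload (a sketch; eth0 stands in for your SDN NIC): on NICs with hardware VXLAN support, the UDP tunnel segmentation features show up as "on".

```bash
# List tunnel-related offload features; look for
# tx-udp_tnl-segmentation / tx-udp_tnl-csum-segmentation set to "on".
ethtool -k eth0 | grep -i -e tnl -e udp
```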

Maybe try disabling the Spectre/Meltdown/... CPU mitigations.
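
For the record, a sketch of how that is typically done on a GRUB-booted system (ZFS/UEFI installs may use proxmox-boot-tool refresh instead), with the obvious security trade-off:

```bash
# /etc/default/grub -- disable all CPU vulnerability mitigations (security trade-off!)
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# Then apply the config and reboot:
update-grub && reboot
```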
 
I was truly afraid of such an answer xD I already tried disabling them, saw no change, so I rolled back.
Maybe check your CPU stats for a single core at 100%: it's quite possible that old NICs don't have VXLAN-aware RSS, so they can't spread the VXLAN traffic across multiple cores.
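
A quick way to check that (a sketch; eth0 again stands for the SDN NIC, and mpstat comes from the sysstat package): watch per-core load while the iperf3 test runs, and look at how many queues the NIC exposes.

```bash
# Watch per-core utilization while iperf3 is running; a single core
# pinned at 100% (often in softirq, the %soft column) is the telltale sign.
mpstat -P ALL 1

# Check how many RX queues / RSS channels the NIC exposes,
# and whether its interrupts all land on one CPU.
ethtool -l eth0
grep eth0 /proc/interrupts
```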