Mellanox DPDK support in Proxmox

Rares

Feb 28, 2017
What is your experience with DPDK, and why is it not supported in Proxmox?

I have read all kinds of articles showing how they got 20% improvements by having DPDK enabled, how they did their Physical-Virtual-Physical tests, about the context switching in the kernel related to the network bridge, etc.

Is it really so? Is this such a good thing that it is worth recompiling? Couldn't you just include this "flag" when you prepare the Proxmox distro?
or
Did you observe stability issues? Did you observe degraded performance in other areas with this enabled? Are there any other issues that I should be aware of?

We have two Mellanox ConnectX-5 dual-port 100Gb cards per server, and I don't know whether I should research this subject further or not. The idea was to make a 400Gb link with 2 switches in MLAG for redundancy.

Thank you,
Rares
 
Hi, don't expect to reach 400Gb with a single server (you'll saturate the PCI/memory/NUMA bus before reaching such bandwidth).

About DPDK: it's not enough for the VM, you need a vhost-user implementation for QEMU, and that's not supported by Proxmox currently.

The latest openvswitch from the Proxmox repo has a DPDK version if you want to test (apt install openvswitch-switch-dpdk),
but it's really missing the vhost-user code currently to be able to test it with VMs.
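
If you want to sanity-check that the DPDK build of OVS is really the one running, something like this should work (just a sketch; the update-alternatives step is how the Debian/Ubuntu openvswitch-switch-dpdk package usually switches the daemon, and the exact fields may differ per OVS version):

Code:
apt install openvswitch-switch-dpdk
# the dpdk build is normally selected via update-alternatives
update-alternatives --display ovs-vswitchd
# a dpdk-enabled build mentions DPDK in its version string
ovs-vswitchd --version
# after other_config:dpdk-init=true is set, this should report true
ovs-vsctl get Open_vSwitch . dpdk_initialized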
 
>>Hi, don't expect to reach 400Gb with a single server (you'll saturate the PCI/memory/NUMA bus before reaching such bandwidth).

Yes, I know; we got a maximum of ~120Gb per card because of the PCIe limitation. The idea of the dual ports is MC-LAG. But even here we observe a difference between assigning the IP directly on the interface and going via a Linux bridge/bond.

>>About DPDK: it's not enough for the VM, you need a vhost-user implementation for QEMU, and that's not supported by Proxmox currently.

>>The latest openvswitch from the Proxmox repo has a DPDK version if you want to test (apt install openvswitch-switch-dpdk),
>>but it's really missing the vhost-user code currently to be able to test it with VMs.

Can you confirm that by just installing openvswitch-switch-dpdk + the default Mellanox drivers from Proxmox I will have DPDK support at least in OVS?

Do you think this will bring any improvement if I use an OVS bridge/bond instead of the Linux bridge/bond? Will this make any difference for containers and Ceph?

Thank you,
Rares
 
>>Can you confirm that by just installing openvswitch-switch-dpdk + the default Mellanox drivers from Proxmox I will have DPDK support at least in OVS?

Yes (but it requires some OVS tuning too; you need to look at the OVS+DPDK docs, and it also needs hugepages, ...).
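
To give an idea of what that tuning looks like, a rough sketch (the hugepage count, socket-mem and core mask below are placeholder examples, check the OVS+DPDK docs for your version; as far as I know the ConnectX-5 uses the mlx5 PMD on top of the kernel driver, so no vfio-pci binding should be needed):

Code:
# reserve 1G hugepages at boot: add to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
#   default_hugepagesz=1G hugepagesz=1G hugepages=8
update-grub   # then reboot

# enable dpdk in ovs and give it memory on each NUMA socket
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
# pin the PMD threads to dedicated cores (hex mask, example: cores 1 and 2)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
systemctl restart openvswitch-switch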

>>Do you think this will bring any improvement if I use an OVS bridge/bond instead of the Linux bridge/bond? Will this make any difference for containers and Ceph?

It really depends on your workload. In a recent test I was able to reach 100-150Gbps with 32 cores (but with big packets).

It's more a problem of the number of packets per second (if you want 10 million pps per core, you'll need DPDK).

If you need to do video streaming with big packets, it could work without DPDK.
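
For the OVS bridge/bond part of your question, the DPDK variant looks roughly like this (netdev datapath plus LACP bond; the bridge name, port names and PCI addresses below are just placeholders for your two 100Gb ports):

Code:
# bridge on the userspace (dpdk) datapath
ovs-vsctl add-br vmbr1 -- set bridge vmbr1 datapath_type=netdev
# bond the two ports; 0000:81:00.0 / 0000:81:00.1 are placeholder PCI addresses
ovs-vsctl add-bond vmbr1 dpdkbond0 dpdk-p0 dpdk-p1 \
  -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:81:00.0 \
  -- set Interface dpdk-p1 type=dpdk options:dpdk-devargs=0000:81:00.1 \
  -- set port dpdkbond0 lacp=active bond_mode=balance-tcp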


About PCI bandwidth, here is an interesting article from Netflix:
https://wccftech.com/netflix-evalua...ors-single-epyc-compared-to-dual-socket-xeon/

(tl;dr: it's really not easy to reach more than 100Gbit/s with current architectures without a lot of tuning.)
 
>>Hi, don't expect to reach 400Gb with a single server (you'll saturate the PCI/memory/NUMA bus before reaching such bandwidth).

>>About DPDK: it's not enough for the VM, you need a vhost-user implementation for QEMU, and that's not supported by Proxmox currently.

>>The latest openvswitch from the Proxmox repo has a DPDK version if you want to test (apt install openvswitch-switch-dpdk),
>>but it's really missing the vhost-user code currently to be able to test it with VMs.

Hi, is there any workaround for Proxmox QEMU for a vhost-user implementation? I did see a patch https://pve.proxmox.com/pipermail/pve-devel/2016-May/020947.html , but I'm not sure whether it is still valid or not.
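
From reading around, my understanding is that the moving parts would be a vhost-user port on the OVS side plus matching QEMU arguments, something like the sketch below (untested, placeholder names and sizes; the guest RAM has to be backed by shared hugepages for vhost-user to work). Is that roughly what the patch was doing?

Code:
# ovs side: add a vhost-user port to the dpdk bridge
ovs-vsctl add-port vmbr1 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser

# qemu side: back the guest memory with shared hugepages and attach the socket
qemu-system-x86_64 ... \
  -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user0 \
  -netdev type=vhost-user,id=net0,chardev=char0,vhostforce=on \
  -device virtio-net-pci,netdev=net0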
 
