Networking - Ubuntu DPDK & SR IOV

Bhupinder

ENVIRONMENT: We have a 20 MB/s external connection to our private cloud in the datacenter. The servers are Supermicro, connected to a Mellanox TOR switch with Mellanox Ethernet cards; each server has both on-board NIC ports (2 x 10 Gbps) and Mellanox card ports (2 x 50 Gbps). Server-to-server data transfer meets the expected speeds (tests can be provided if required). We have installed Proxmox VE on bare metal.

Our operating needs are VMs (both Linux and Windows) for running applications and databases. Some of these VMs are connected to and communicate with each other.

For testing we have created, and will maintain, three operating environments:

  • Unit Testing - we call this staging.
  • Pre-production - we call this beta.
  • Production - the live environment.
Each of these has around 6-7 VMs, some running Ubuntu 20.04 and others Windows.

We have also created a test Ceph cluster (Ceph on Ubuntu). It consists of a Ceph master and three Ceph nodes for the pilot.

ISSUES:

When we try to download anything within the Proxmox server or any of the VMs, the download speed is only around 1.5 MB/s. Could you please help us improve the download speed? There is also latency when we execute commands between VMs.

Question 1: Will it help to improve performance if we do the following?



  • Enable DPDK on Ubuntu
  • Enable Open vSwitch and communicate directly with the hardware
  • Enable SR-IOV

Question 1A: If yes, what are the points we need to keep in mind during configuration, and what settings need to change in the firmware, in Proxmox VE (Debian), and in the Ubuntu VMs?
Question 2: How should we set up the NIC card interfaces to get the optimum configuration and throughput, in line with the hardware speeds listed above under Environment?
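For illustration only (not from the thread): on Proxmox this kind of setup is usually expressed in /etc/network/interfaces as a bond over the two fast ports with a bridge on top. A minimal sketch follows; the interface names, addresses and the LACP bond mode are assumptions and have to match the actual Mellanox ports and the switch configuration.

Code:
# /etc/network/interfaces - minimal sketch, not the poster's actual config.
# enp65s0f0np0/np1, the addresses and bond mode 802.3ad are placeholders/assumptions;
# 802.3ad requires a matching LACP configuration on the TOR switch.
auto enp65s0f0np0
iface enp65s0f0np0 inet manual

auto enp65s0f0np1
iface enp65s0f0np1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0np0 enp65s0f0np1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 172.16.0.10/24
    gateway 172.16.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
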
Question 3: We propose to use NVMe drives for the production environment. What is the best practice when adding them to the Proxmox server, and what configuration changes do we need to keep in mind while doing so?
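For illustration only: a common approach is to give the NVMe drives their own Proxmox storage, for example a ZFS pool. The sketch below assumes two NVMe drives in a mirror; the device names, pool name and storage ID are placeholders.

Code:
# Sketch (placeholders throughout): create a mirrored ZFS pool on two NVMe drives
# and register it as a Proxmox storage for VM disks and containers.
zpool create -o ashift=12 nvmetank mirror /dev/nvme0n1 /dev/nvme1n1
pvesm add zfspool nvme-zfs --pool nvmetank --content images,rootdir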

Thanks for all the help; we are new to Proxmox and need this august community's guidance.
 
Apologies,
I meant that the connection from the external environment is the bandwidth the service provider delivers, which lands on our switch.
I hope that clarifies it. From the Mellanox TOR switch we have 20 Gbps and 50 Gbps in full duplex mode. The speed issue is on communication between VMs, both on the same server and across two physical servers.
 
Code:
The speed issue is on communication between VMs on both the same server
How do you bench? What is the bandwidth benchmark result between 2 VMs? Do you use a virtio NIC in your VMs? (Don't use e1000 or Realtek.)
Between 2 Linux VMs, you should be able to reach 10 Gbit/s without problem.
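For illustration, assuming a VM ID of 100 and the bridge vmbr0 (both placeholders), the NIC model can be checked and switched to VirtIO from the CLI as well as from the web UI:

Code:
# Sketch only: show the current NIC model of a VM, then switch it to VirtIO.
# Note that re-setting net0 without an explicit MAC generates a new MAC address.
qm config 100 | grep ^net
qm set 100 --net0 virtio,bridge=vmbr0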
 

We are using a Linux Bridge with VirtIO (paravirtualized).
The rate limit is set to Unlimited.
The iperf file is attached
 

Attachments

  • IPERF tests in different scenarios.pdf
    188.8 KB
Speed Tests done at hardware level
 

Attachments

  • NIC CARD & NETWORK SPEED AND NIC ID SELECTION.pdf
    278.1 KB
So, for VM1 -> VM2, you seem to be limited to 1 Gbit/s. Are you really sure that both VMs are on the same Proxmox server and the same vmbr?


From laptop <-> VM, it's limited by your ISP bandwidth, nothing related to QEMU performance.


(You really don't need DPDK, SR-IOV, ... until you want to reach more than 10 Gbit/s.)
 
Yes, both VMs are on the same physical machine.
1. There are hard disks that will hold the Ceph nodes, and these will also communicate with nodes on other machines. While executing commands from the Ceph master to the nodes on the same machine, there are heavy latency and timeout issues. We believe there is choking somewhere when we run a test using four VMs on the same machine.
2. Further, we want to reduce the page response time to a minimum for consumer satisfaction.
3. The number of concurrent users will be beyond 10,000, hence the transactions need to be completed and recorded in the database.
4. Being a hybrid microservices architecture, we have heavy communication between the VMs.

The thought was that by using DPDK we bypass the Linux kernel for communication between VMs on the same physical machine, and use SR-IOV to increase throughput between physical machines.
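For what it's worth, a rough sketch of the usual SR-IOV steps on a Proxmox host is below; every name in it is a placeholder, it is not a tested recipe for this setup, and as noted earlier in the thread it is probably unnecessary below 10 Gbit/s.

Code:
# Rough sketch, placeholders throughout.
# 1) Enable the IOMMU: add intel_iommu=on (Intel CPUs) to GRUB_CMDLINE_LINUX_DEFAULT
#    in /etc/default/grub, run update-grub and reboot the Proxmox host.
# 2) Create virtual functions on the Mellanox port (interface name is a placeholder):
echo 4 > /sys/class/net/enp65s0f0np0/device/sriov_numvfs
# 3) Pass one VF through to a VM as a PCI device (VM ID and PCI address are placeholders):
qm set 100 --hostpci0 0000:41:00.2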

We are looking for guidance on this; please point us in the direction of how to increase throughput in the above use case. The hardware is designed for a throughput of 25 Gbps and 50 Gbps and is already in place.

Hope this clarification helps.
 
Here is an example of iperf between 2 VMs on the same server/same vmbr in my production environment (Xeon 3 GHz), without any special tuning, using the virtio-win NIC.


Code:
# iperf -c X.X.X.X
------------------------------------------------------------
Client connecting to X.X.X.X, TCP port 5001
TCP window size: 1.00 MByte (default)
------------------------------------------------------------
[  3] local X.X.X.X port 57412 connected with X.X.X.X port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  8.06 GBytes  8.12 Gbits/sec


If both VMs are on the same server and the same vmbr/VLAN, the traffic does not go through the physical interface.

Your result looks very strange, because it really seems to be capped at 1 Gbit/s.
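One way to confirm where the traffic goes, sketched below, is to watch the physical uplink's counters while the VM-to-VM iperf runs; the interface name is the one from the ethtool output further down and may differ on your host.

Code:
# Sketch: if the traffic really stays on vmbr0, the uplink's RX/TX counters
# should stay roughly flat during the VM-to-VM iperf run.
bridge link show                       # which tap/physical ports sit on which bridge
watch -n1 'ip -s link show enp26s0f0'  # byte counters of the physical uplink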


Also, looking at your ethtool result:

Code:
root@activewellnessserver:~# ethtool enp26s0f0  (or enp26s0f1)
Settings for enp26s0f0:
...
Speed: 1000Mb/s
Duplex: Full

You have 1 Gbit/s, not 10 Gbit/s.

(Maybe wrong switch configuration, wrong cable, don't know...)
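A sketch of the usual checks for a port that negotiates at the wrong speed follows; whether forcing the speed is appropriate depends on the NIC, transceiver/cable and switch port.

Code:
# Sketch: check what the NIC advertises and what module is plugged in, then
# (only if cable and switch port support it) force the expected speed.
ethtool enp26s0f0        # supported/advertised link modes and current speed
ethtool -m enp26s0f0     # transceiver/DAC module info, if the NIC exposes it
ethtool -s enp26s0f0 speed 10000 duplex full autoneg off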





On the hypervisor where you are benching, can you send the content of

/etc/network/interfaces

and

/etc/pve/qemu-server/<vmid>.conf (for the 2 VMs where you test iperf; replace <vmid> by the ID of the VM)?
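(For context, purely as an illustration, the network-related line in such a VM config normally looks like the following; the MAC address and bridge are placeholders.)

Code:
# Example net line from /etc/pve/qemu-server/<vmid>.conf (placeholder values):
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,firewall=1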
 
Hi,
Sending the details. Do let me know if I can provide anything more.
 

Attachments

  • Interface Details - For review.txt
    3.4 KB
Mmmm, I still don't know why you can't reach more than 1 Gbit/s.

from your test:
Code:
From proxmox VM1 to VM2:
root@CEPHNode-3-Beta:~# iperf -c 172.16.0.90
------------------------------------------------------------
Client connecting to 172.16.0.90, TCP port 5001
TCP window size: 620 KByte (default)
------------------------------------------------------------
[  3] local 172.16.0.91 port 35004 connected with 172.16.0.90 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes   928 Mbits/sec

are you 100% sure that 172.16.0.90 is the vm CEPHNode-2-Beta ?


Because your vmbr0 is indeed using your NIC running at 1 Gbit/s instead of 10 Gbit/s, it really looks like the traffic is going outside through this link.
 
CEPHNode-2-Beta -- This is 152 --- IP is 172.16.0.90
CEPHNode-3-Beta -- This is 153 --- IP is 172.16.0.91
Checked and confirming here.
Could the fact that no bonds are defined be the reason?
 
Also enclosing the System PVE REPORT
 

Attachments

  • activewellnessserver-pve-report-Tue-20-September-2022-10-31.txt
    316 KB
CEPHNode-2-Beta -- This is 152 --- IP is 172.16.0.90
CEPHNode-3-Beta -- This is 153 --- IP is 172.16.0.91
Checked and confirming here.
Could the fact that no bonds are defined be the reason?
No. As I said, if both VMs are on the same Proxmox node and use the same vmbr0, the network traffic does not go to the physical NIC,
so you should get more than 1 Gbit/s.

Can you try:
- enable queues=2 in the VM NIC advanced options

and,
if you use iperf2, try to connect with "iperf -P 2 -c ...",
or
if you use iperf3, try to launch 2x "iperf -c ..." commands in parallel,

and see if the cumulated bandwidth is bigger than 1 Gbit/s (a sketch of both steps follows below).
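As a sketch, the two suggestions above translate roughly to the following commands; VM 153 is the iperf client from this thread, and the MAC address is a placeholder (keep the VM's existing MAC when re-setting net0, otherwise a new one is generated).

Code:
# Enable two queues on the VM NIC, then run two parallel iperf streams.
qm set 153 --net0 virtio=DE:AD:BE:EF:00:02,bridge=vmbr0,queues=2
iperf -c 172.16.0.90 -P 2    # iperf2: two parallel streams to the other VM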



Also, have you looked into fixing your physical 1 Gbit/s -> 10 Gbit/s link?
 
The outputs are attached
 

Attachments

  • Between two VMs 2022-09-20 at 1.39.54 PM.jpeg
    65.8 KB
  • Laptop to VM 2022-09-20 at 1.40.34 PM.jpeg
    120.8 KB
Checked again. Changed to "virtio" instead of "e1000", i.e. edited the 153 VM and set the Network Model to "virtio" via the PVE web UI -> Datacenter -> activewellnessserver -> 153 -> Hardware -> Network Device.
From proxmox VM1 to VM2:
root@CEPHNode-3-Beta:~# iperf -c 172.16.0.90
------------------------------------------------------------
Client connecting to 172.16.0.90, TCP port 5001
TCP window size: 620 KByte (default)
------------------------------------------------------------
[  3] local 172.16.0.91 port 35004 connected with 172.16.0.90 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes   928 Mbits/sec

The output is attached
 

Attachments

  • Change of Network Model 2022-09-20 at 2.36.28 PM.pdf
    64.7 KB
