More than 10 GbE between 2 VMs

ChAoS

Member
Apr 29, 2021
Hello,

We are using Proxmox to host our backup server running Veeam Backup, which backs up our VMware environment, and also to host some other servers.

Currently we back up through a Fibre Channel card that is passed through to the VM; this works fine with its physical 8 Gbit/s link directly into the FC storage arrays.

Now we plan to change everything and get a completely new VMware environment (I know, as a Proxmox fan it is a shame to use VMware, but it is a political decision made by our company leaders). Backups will now go over Ethernet.
I installed a dual-port SFP+ card in the host; for the future backup traffic it will get two 25 Gbit transceivers connected via fibre to an Ethernet switch using LACP (so a theoretical speed of up to 50 Gbit/s may be possible).
Passing this card through to the VM makes the system unstable; the card seems to be "tricky" and does not like PCI passthrough. Since the card is recognized correctly in Proxmox, I decided to attach it to a vmbr on top of an LACP bond instead (accepting a few percent of performance loss); a sketch of such a configuration is below.
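
For reference, a bond-plus-bridge setup like this looks roughly as follows in /etc/network/interfaces on the Proxmox host (the interface names enp1s0f0/enp1s0f1 and the bridge name vmbr1 are placeholders, not my exact values):

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
# 802.3ad = LACP; layer3+4 hashing lets multiple flows spread across both links

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
# the VMs attach their virtio NICs to vmbr1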

Since the Windows Server VM on Proxmox reports "10 GbE" for the virtio NIC (most people say this is only a cosmetic number), I ran iperf between two VMs on the same Proxmox host and on the same bridge. I am "only" getting a maximum of 10 Gbit/s, not more. The server is a Lenovo SR650 V1 with 32 x Intel(R) Xeon(R) Silver 4215R CPU @ 3.20GHz (2 sockets), which should have more than enough power to push well over 10 Gbit/s virtually.

On the other hand, I remember seeing 100 Gbit/s with virtio drivers somewhere in the past... so my question is:

Is 10 Gbit/s a cosmetic number, and my real measurement just happens to land at the same value,
or
is 10 Gbit/s the real maximum speed of the virtio driver, and is there any trick to bump it towards 100 Gbit/s and get the full power of the hardware?

Thank you for any help

Dirk
 
The 10 Gbit/s is really only cosmetic.

Have you tried running multiple parallel connections with your iperf?
Also, iperf3 only uses one core, while iperf2 can use a separate core for each connection. (Just check that your iperf process is not pegging a single core at 100%.)
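
For example (the server name and stream counts here are just placeholders):

# iperf2: one process can drive multiple streams, one thread per stream
iperf -s                          # on the server VM
iperf -c <server> -P8 -t 30       # on the client VM, 8 parallel streams

# iperf3 is single-threaded per process; start several instances on
# different ports if one instance maxes out a core
iperf3 -s -p 5201                 # server instance 1 (repeat with -p 5202)
iperf3 -c <server> -p 5201 -P4    # client instance 1 (repeat with -p 5202)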

Also, the Windows virtio drivers are known to be slower than the Linux ones. (I don't know if you have tried to benchmark between two Linux VMs?)
 
@spirit

Thank you for the fast response.
I tested with multiqueue enabled on both VMs (4 queues each)
and ran the test with iperf2 using 4 and 8 streams (a CLI sketch for multiqueue is after the results below).
The results seem OK, but I thought there was more power in the box.
Maybe with Linux, but in my scenario I have to use a Windows Server 2019 VM.


------------------------------------------------------------
Client connecting to DC-2, TCP port 5001
TCP window size: 208 KByte (default)
------------------------------------------------------------
[ 4] local 10.1.111.8 port 62708 connected with 10.1.111.240 port 5001
[ 6] local 10.1.111.8 port 62710 connected with 10.1.111.240 port 5001
[ 3] local 10.1.111.8 port 62707 connected with 10.1.111.240 port 5001
[ 5] local 10.1.111.8 port 62709 connected with 10.1.111.240 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 4.35 GBytes 3.74 Gbits/sec
[ 6] 0.0-10.0 sec 7.93 GBytes 6.81 Gbits/sec
[ 3] 0.0-10.0 sec 4.78 GBytes 4.10 Gbits/sec
[ 5] 0.0-10.0 sec 4.78 GBytes 4.10 Gbits/sec
[SUM] 0.0-10.0 sec 21.8 GBytes 18.8 Gbits/sec

C:\Users\XXX\Desktop\iperf-2.0.9-win64>iperf.exe -c DC-2 -P8
------------------------------------------------------------
Client connecting to DC-2, TCP port 5001
TCP window size: 208 KByte (default)
------------------------------------------------------------
[ 10] local 10.1.111.8 port 62737 connected with 10.1.111.240 port 5001
[ 5] local 10.1.111.8 port 62732 connected with 10.1.111.240 port 5001
[ 8] local 10.1.111.8 port 62735 connected with 10.1.111.240 port 5001
[ 7] local 10.1.111.8 port 62734 connected with 10.1.111.240 port 5001
[ 9] local 10.1.111.8 port 62736 connected with 10.1.111.240 port 5001
[ 6] local 10.1.111.8 port 62733 connected with 10.1.111.240 port 5001
[ 4] local 10.1.111.8 port 62731 connected with 10.1.111.240 port 5001
[ 3] local 10.1.111.8 port 62730 connected with 10.1.111.240 port 5001
[ ID] Interval Transfer Bandwidth
[ 10] 0.0-10.0 sec 3.03 GBytes 2.61 Gbits/sec
[ 5] 0.0-10.0 sec 2.53 GBytes 2.18 Gbits/sec
[ 8] 0.0-10.0 sec 5.33 GBytes 4.58 Gbits/sec
[ 7] 0.0-10.0 sec 5.40 GBytes 4.64 Gbits/sec
[ 9] 0.0-10.0 sec 3.02 GBytes 2.60 Gbits/sec
[ 6] 0.0-10.0 sec 2.50 GBytes 2.15 Gbits/sec
[ 4] 0.0-10.0 sec 2.54 GBytes 2.18 Gbits/sec
[ 3] 0.0-10.0 sec 2.50 GBytes 2.15 Gbits/sec
[SUM] 0.0-10.0 sec 26.9 GBytes 23.1 Gbits/sec
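
By the way, for anyone wanting to reproduce this: multiqueue can also be enabled per NIC from the Proxmox CLI, roughly like this (VM ID 100 and bridge vmbr1 are just placeholders for my real values):

qm set 100 --net0 virtio,bridge=vmbr1,queues=4
# queues is usually set to the number of vCPUs of the VM; note that writing
# net0 without an explicit MAC address lets Proxmox generate a new one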


Thank you so much

Dirk
 
