[SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

Ramalama
I just stumbled over something very weird with LXC containers, but I bet it happens with VMs as well:

I have 2 identical nodes in a cluster:
- both are connected over 2x25G in LACP (NIC: Intel E810)
- CPU: Genoa 9374F
- RAM: 12x64GB (All Channels 1DPC) 768GB
- Storage: ZFS Raid10 (8x Micron 7450 Max)

So those nodes are anything but slow in any regard. PVE is working great; this is actually the first issue I have.
I have more than 10 PVE servers and I've been around here basically forever.
--> What I mean is: everyone should have this issue! But probably nobody noticed it, or even thought to test for it.

Until today I assumed that an iperf3 test, or network speed in general, should be insane between 2 LXCs/VMs on the same node, because the packets don't leave the node (if both containers/VMs are in the same network), meaning they never actually leave the vmbridge.
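A quick way to confirm that both containers really hang off the same bridge (assuming the bridge is called vmbr0, adjust to your setup):
Code:
# on the PVE host: list all interfaces attached to vmbr0
# (the containers' veth pairs should both show up here)
ip -br link show master vmbr0
# alternative view of the bridge ports
bridge link show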

But this is absolutely not the case...
With iperf3 (no special arguments, just -s and -c) I get the following:
- Both LXCs on the same node: 14.1 Gbits/sec
- Each LXC on a separate node: 20.5 Gbits/sec

It feels like my understanding is broken now: when the packets leave the host, there is at least the hard limit of 25G.
But on the same host you don't have any such limit, so I expected to see at least something like 40 Gbits/s.

Does anyone have a clue, or has anyone already tried an iperf3 test?
Please run at least 3-5 tests.
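If anyone wants to reproduce it, this is basically all I'm running (the IP is just an example, use the address of your second container):
Code:
# in the first container: start the iperf3 server
iperf3 -s
# in the second container: run the client a few times in a row
for i in 1 2 3 4 5; do iperf3 -c 172.17.1.122 -t 10; done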

My issue is that these are the first servers with 25G links I have; all the others have 10G, and 10G was never an issue.
But I never thought of running iperf3 between 2 containers or VMs on the same node xD

Cheers
 
I just migrated the container back, so that both are on the same node again, and retested:

Code:
iperf3 -c 172.17.1.122
Connecting to host 172.17.1.122, port 5201
[  5] local 172.17.1.129 port 35156 connected to 172.17.1.122 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  2.24 GBytes  19.2 Gbits/sec    0    530 KBytes       
[  5]   1.00-2.00   sec  3.30 GBytes  28.3 Gbits/sec    0    530 KBytes       
[  5]   2.00-3.00   sec  3.70 GBytes  31.8 Gbits/sec    0    530 KBytes       
[  5]   3.00-4.00   sec  4.09 GBytes  35.1 Gbits/sec    0    530 KBytes       
[  5]   4.00-5.00   sec  3.90 GBytes  33.5 Gbits/sec    0    530 KBytes       
[  5]   5.00-6.00   sec  3.69 GBytes  31.7 Gbits/sec    0    617 KBytes       
[  5]   6.00-7.00   sec  3.71 GBytes  31.8 Gbits/sec    0    617 KBytes       
[  5]   7.00-8.00   sec  3.87 GBytes  33.2 Gbits/sec    0    617 KBytes       
[  5]   8.00-9.00   sec  4.11 GBytes  35.3 Gbits/sec    0    617 KBytes       
[  5]   9.00-10.00  sec  4.08 GBytes  35.1 Gbits/sec    0    617 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  39.4 GBytes  33.8 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  39.4 GBytes  33.8 Gbits/sec                  receiver

iperf Done.

Code:
iperf3 -c 172.17.1.122
Connecting to host 172.17.1.122, port 5201
[  5] local 172.17.1.129 port 60096 connected to 172.17.1.122 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.61 GBytes  13.8 Gbits/sec    0    396 KBytes       
[  5]   1.00-2.00   sec  1.51 GBytes  13.0 Gbits/sec    0    396 KBytes       
[  5]   2.00-3.00   sec  1.57 GBytes  13.5 Gbits/sec    0    396 KBytes       
[  5]   3.00-4.00   sec  1.55 GBytes  13.3 Gbits/sec    0    396 KBytes       
[  5]   4.00-5.00   sec  1.34 GBytes  11.5 Gbits/sec    0    788 KBytes       
[  5]   5.00-6.00   sec  1.57 GBytes  13.5 Gbits/sec    0    788 KBytes       
[  5]   6.00-7.00   sec  1.57 GBytes  13.5 Gbits/sec    0    788 KBytes       
[  5]   7.00-8.00   sec  1.56 GBytes  13.4 Gbits/sec    0    977 KBytes       
[  5]   8.00-9.00   sec  1.58 GBytes  13.5 Gbits/sec    0   1.39 MBytes       
[  5]   9.00-10.00  sec  1.53 GBytes  13.1 Gbits/sec    0   1.39 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  16.2 GBytes  13.9 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  16.2 GBytes  13.9 Gbits/sec                  receiver

iperf Done.

WTF is happening here?
It's exactly the same test both times, both containers on the same node.
I don't get why it's so extremely inconsistent.
 
Okay, I found the KEY issue.

When it runs at 13-14 Gbit/s, I'm hitting a single-thread/core limit on the PVE host itself!
But I'm not seeing which process is eating that one core, so it must be the kernel or a module.

When it runs at 34 Gbit/s, I'm not hitting a single-thread/core limit; instead 2-4 cores hit around 80%.
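For reference, this is roughly how I'm watching the per-core load while the test runs (nothing Proxmox-specific, just standard tools; mpstat comes from the sysstat package):
Code:
# on the PVE host, while iperf3 runs:
mpstat -P ALL 1          # per-core utilisation, 1s interval (or press '1' in plain top)
# the "invisible" load is usually softirq/ksoftirqd time rather than a user process
watch -n1 'grep -E "NET_RX|NET_TX" /proc/softirqs'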

Seems to me like some sort of bug in the kernel itself, since virtio/vmbr is all handled directly by the kernel.
Nothing leaves the host, so it can't be the E810 NIC or any drivers related to it.

Is anyone aware of any "tuning" or a way to force multithreading for virtio or vmbr? It obviously does multithread anyway when it hits 34 Gbit/s.
I'm not sure if it's a vmbr or a virtio issue.
But I believe everyone would benefit a lot if we can rule this out.
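One thing that might be worth checking (just an assumption on my side, not a confirmed fix): whether the containers' veth interfaces spread their receive work over more than one core at all, e.g. via RPS:
Code:
# on the PVE host; veth122i0 is just an example name, use "ip link" to find yours
ethtool -l veth122i0                                # queue count (may be unsupported on veth)
cat /sys/class/net/veth122i0/queues/rx-0/rps_cpus   # 0 = RPS disabled
# experiment: allow RX processing on CPUs 0-3 (bitmask "f")
echo f > /sys/class/net/veth122i0/queues/rx-0/rps_cpus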

Cheers
 
edited: iperf3 supports multiple streams, BUT it only uses multiple CPU threads since v3.16 (December 2023)
 
iperf3 is known to be a single-CPU-thread application.
iperf2 is multi-threaded.
There is no difference, and it has nothing to do with iperf itself anyway.
You can run parallel streams with both iperf2 and iperf3; there is no difference. But I'm not talking about parallel streams.

I'm talking about something in the vmbridge, the kernel, or the kernel's IP stack that is sometimes multithreaded and sometimes single-threaded.
Where exactly in the kernel the issue sits, probably nobody can tell, since you cannot see the task that utilizes the core, or sometimes multiple cores.

The issue is that iperf (not parallel, a single connection) sometimes runs at 40G speeds, and most of the time at 14-16G speeds, depending on whether the kernel decides to use only a single core or multiple cores.
 
iperf3 can use one thread per stream only since version 3.16
https://github.com/esnet/iperf/releases/tag/3.16
Before 3.16, iperf3 uses only one CPU thread, even with parallel streams (-P).
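So the quick check is to look at the installed version first, and only then judge the -P results (the IP is just an example):
Code:
iperf3 --version                  # needs to be 3.16 or newer for one thread per stream
iperf3 -c 172.17.1.122 -P 4       # 4 parallel streams (and, with >= 3.16, 4 CPU threads)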

edit: just tested: a constant 50 Gbits/s between two CTs (default Alpine template), iperf3 v3.14 (single stream and -P 4 streams, same speed), CPU Xeon E-2386G
 
FWIW, 200 Gbits/s (!) between two default Alpine template CTs over a Linux bridge, with a static build of iperf3 v3.16, -P 8 streams and thus 8 CPU threads, on a Xeon E-2386G (with all 12 threads it is 177 Gbits/s; with 4 streams & threads, 150 Gbits/s).
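In case your distro still ships an older iperf3, a rough sketch of building 3.16 yourself from the release linked above (prefix and paths are just an example):
Code:
# on Debian/PVE: apt install build-essential ; on Alpine: apk add build-base
wget https://github.com/esnet/iperf/archive/refs/tags/3.16.tar.gz
tar xf 3.16.tar.gz && cd iperf-3.16
./configure --prefix=/opt/iperf-3.16 && make && make install
/opt/iperf-3.16/bin/iperf3 --version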
 
Single-socket Genoa 9374F / 12x DDR5 memory channels with 64GB DIMMs / ultra-fast RAID 10 of 8x Micron 7450 MAX:
Code:
8 Streams:
Ubuntu 24.04:
[SUM]   0.00-10.00  sec   123 GBytes   105 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec   123 GBytes   105 Gbits/sec                  receiver
Alpine:
[SUM]   0.00-10.00  sec   125 GBytes   107 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec   125 GBytes   107 Gbits/sec                  receiver

One stream:
Ubuntu 24.04:
[  5]   0.00-10.00  sec  16.4 GBytes  14.1 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  16.4 GBytes  14.1 Gbits/sec                  receiver
Alpine:
[  5]   0.00-10.00  sec  16.4 GBytes  14.1 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  16.4 GBytes  14.1 Gbits/sec                  receiver

Single-socket Xeon Silver 4210R / 4x DDR4 memory channels with 64GB DIMMs / 4x Samsung 870 EVO in RAID 10:
Code:
8 Streams:
Alpine:
[SUM]   0.00-10.00  sec   140 GBytes   120 Gbits/sec  7460             sender
[SUM]   0.00-10.00  sec   140 GBytes   120 Gbits/sec                  receiver

One Stream:
Alpine:
[  5]   0.00-10.00  sec  34.7 GBytes  29.8 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  34.7 GBytes  29.8 Gbits/sec                  receiver

The Xeon is, compared to the Genoa, literally crap.
But in a single stream it is 2x faster.

The Genoa runs at 4.3 GHz, at least the one core that is at 100% during iperf3.
The Xeon only goes up to 2.8 GHz and can't go above 3.2 GHz anyway.
And the retransmit count is very high on the Xeon with parallel streams; it looks to me like something is wrong. I probably need to test on an earlier Proxmox kernel. There is definitely something wrong; these speed tests make no sense to me.

It makes literally no sense at all.
 
I'm further along with my investigation:
If I set the LXC container to use only one CPU core:

Code:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.28 GBytes  36.7 Gbits/sec    0    508 KBytes      
[  5]   1.00-2.00   sec  4.27 GBytes  36.8 Gbits/sec    0    537 KBytes      
[  5]   2.00-3.00   sec  4.26 GBytes  36.6 Gbits/sec    0    598 KBytes      
[  5]   3.00-4.00   sec  4.23 GBytes  36.4 Gbits/sec    0    663 KBytes      
[  5]   4.00-5.00   sec  4.23 GBytes  36.4 Gbits/sec    0    663 KBytes      
[  5]   5.00-6.00   sec  4.24 GBytes  36.4 Gbits/sec    0    697 KBytes      
[  5]   6.00-7.00   sec  4.23 GBytes  36.3 Gbits/sec    0    799 KBytes      
[  5]   7.00-8.00   sec  4.22 GBytes  36.3 Gbits/sec    0    799 KBytes      
[  5]   8.00-9.00   sec  4.22 GBytes  36.3 Gbits/sec    0    799 KBytes      
[  5]   9.00-10.00  sec  4.22 GBytes  36.2 Gbits/sec    0    799 KBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  42.4 GBytes  36.4 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  42.4 GBytes  36.4 Gbits/sec                  receiver
Previously I used 6 cores for my LXC containers; this is getting very weird.
Setting it to only 1 core increases the throughput from 14 Gbit/s to 36 Gbit/s. Not even a hyperthreading core should be that slow.
There is definitely something broken.
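For completeness, the core count was simply changed via pct (CT ID 122 is just an example; a container restart may be needed for it to fully apply):
Code:
# on the PVE host
pct set 122 -cores 1         # limit the container to a single core
pct config 122 | grep cores  # verify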

Seems like some sort of multithreading bug, or Proxmox has issues with hyperthreading (or whatever it's called on AMD). Maybe it's not Proxmox, but the kernel.

I have to disable hyperthreading on my Genoa servers and retest. 32 cores should hopefully be enough for my VMs.
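Instead of rebooting into the BIOS for every test, SMT can also be toggled at runtime as a quick check (standard kernel interface, nothing Proxmox-specific):
Code:
cat /sys/devices/system/cpu/smt/active           # 1 = SMT currently on
echo off > /sys/devices/system/cpu/smt/control   # disable SMT until reboot ("echo on" to re-enable)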
Cheers

EDIT: More testing:
Setting 4 CPU cores per LXC and using "iperf3 -c xxxx -P2" (2 streams) is even worse xD
Code:
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  16.8 GBytes  14.4 Gbits/sec                  receiver
[  8]   0.00-10.00  sec  16.8 GBytes  14.5 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  33.7 GBytes  28.9 Gbits/sec                  receiver

- Setting 2 cores and running 2 parallel iperf tests starts to vary: sometimes I get 72 Gbit/s, most of the time 32.5 Gbit/s.
- Setting 1 core and 1 iperf test is very consistent! Always 36.4 Gbit/s.
- 1 core and 2/3/4/6 parallel iperf tests reach 36.5 Gbit/s all together, the same as a single iperf test.

So in conclusion: if I give a VM/LXC container more than 1 core (tested only with LXCs), at least on Genoa, something weird starts to happen.
 
Still debugging, but I found the best example:
LXC containers with 2 cores assigned and an iperf3 test with -P 2

Code:
[  5]   9.00-10.00  sec  1.61 GBytes  13.8 Gbits/sec    0   1.02 MBytes    
[  7]   9.00-10.00  sec  4.10 GBytes  35.2 Gbits/sec    0    513 KBytes    
[SUM]   9.00-10.00  sec  5.71 GBytes  49.0 Gbits/sec    0

Here it's really clearly visible: one connection runs at ~14 Gbit/s and the other at 35.2 Gbit/s.
36.4 Gbit/s is the limit of what one real core can do; 14 Gbit/s looks to me like it used a hyperthreading core.

But again, to be clear: it is not iperf3 itself that causes the load; there is some sort of bug in the PVE kernel itself or in a module.
Probably a scheduler bug on AMD systems.
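One way to test the scheduler theory would be to pin the iperf3 processes to fixed cores and see whether the 14 vs 35 Gbit/s split disappears (core numbers are just an example; check lscpu to see which IDs are SMT siblings):
Code:
# server side, pinned to core 2
taskset -c 2 iperf3 -s
# client side, pinned to core 4 (IP is an example)
taskset -c 4 iperf3 -c 172.17.1.122 -P 2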

- Intel is definitely not affected, since I tested this on all my Intel servers and they all behave as expected!
- A Ryzen 5800X server (the only other AMD server I have) is definitely not affected either.
- Both 9374F servers are affected!

So it seems like a Genoa-specific issue. Shit, that means I'm on my own :-(
 
Tested on an AMD EPYC 7302 16c/32t (Rome / 2nd gen / 2020 era) running PVE 7.2 and kernel 5.15.35-1 (mitigations=off),
between 2 LXC containers (Alpine 3.18 default template from mid-2023), 2 cores assigned.
iperf3 is a constant 15 Gbits/s & iperf3 -P 2 is a constant 30 Gbits/s (2 x 15 Gbits/s)
 
15 Gbit/s is a bit low for 2 containers on the same node and same network.
Very low; I reach at least 30 Gbit/s per core/stream even on low-end E5 v3, extremely old Xeons.
 
Okay, I'm further along in my research; there does indeed seem to be a bug:

If I do an iperf3 test from an LXC to the node directly:
Code:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.44 GBytes  37.9 Gbits/sec    0    434 KBytes       
[  5]   1.00-2.00   sec  4.33 GBytes  37.3 Gbits/sec    0    458 KBytes       
[  5]   2.00-3.00   sec  4.38 GBytes  37.6 Gbits/sec    0    458 KBytes       
[  5]   3.00-4.00   sec  4.40 GBytes  37.8 Gbits/sec    0    458 KBytes       
[  5]   4.00-5.00   sec  4.38 GBytes  37.6 Gbits/sec    0    458 KBytes       
[  5]   5.00-6.00   sec  4.37 GBytes  37.6 Gbits/sec    0    458 KBytes       
[  5]   6.00-7.00   sec  4.34 GBytes  37.3 Gbits/sec    0    458 KBytes       
[  5]   7.00-8.00   sec  4.35 GBytes  37.4 Gbits/sec    0    458 KBytes       
[  5]   8.00-9.00   sec  4.35 GBytes  37.4 Gbits/sec    0    458 KBytes       
[  5]   9.00-10.00  sec  4.38 GBytes  37.6 Gbits/sec    0    458 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  43.7 GBytes  37.6 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  43.7 GBytes  37.6 Gbits/sec                  receiver
It is very consistent! No matter how many times I try.

If I go the other way, from node to LXC:
Code:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.49 GBytes  38.6 Gbits/sec    0    505 KBytes
[  5]   1.00-2.00   sec  4.48 GBytes  38.5 Gbits/sec    0    505 KBytes
[  5]   2.00-3.00   sec  4.48 GBytes  38.5 Gbits/sec    0    533 KBytes
[  5]   3.00-4.00   sec  1.94 GBytes  16.7 Gbits/sec    0    560 KBytes
[  5]   4.00-5.00   sec  1.72 GBytes  14.8 Gbits/sec    0    560 KBytes
[  5]   5.00-6.00   sec  1.73 GBytes  14.9 Gbits/sec    0    560 KBytes
[  5]   6.00-7.00   sec  1.73 GBytes  14.8 Gbits/sec    0    560 KBytes
[  5]   7.00-8.00   sec  1.73 GBytes  14.9 Gbits/sec    0    631 KBytes
[  5]   8.00-9.00   sec  1.73 GBytes  14.8 Gbits/sec    0    631 KBytes
[  5]   9.00-10.00  sec  1.73 GBytes  14.9 Gbits/sec    0    631 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  25.8 GBytes  22.1 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  25.8 GBytes  22.1 Gbits/sec                  receiver
It starts high, but then drops down to the usual 14 Gbit/s.

But this doesn't happen on any Intel-based system. I'm so fed up; first I found a limitation here, then in PBS in another thread...
I'm getting tired of hunting for the root cause.
I tested whether it has something to do with hyperthreading and disabled SMT: no difference in behaviour. Set the scaling governor to performance: no difference. Tried different "tuning profiles" in the Genoa BIOS, like Workload Optimized by Clockspeed or HPC: no difference.
Set the cTDP up to 400W and overclocked the 9374F to 4.5 GHz: no difference.
By now I've basically tried all BIOS settings; nothing helps.

I only got a slight speed bump from ~14 Gbit/s to almost 15 Gbit/s, thanks to the HPC profile and overclocking, lol.

The best part is: when it runs at ~38 Gbit/s, 2-4 cores get utilized. When it runs at 14 Gbit/s, only one core gets utilized.
And I'm talking about a single iperf3 stream. It doesn't matter whether I test node/LXC or LXC/LXC; it's always the same behaviour. And I have no influence on that decision.
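My suspicion (purely an assumption at this point) is that it depends on whether the sender, receiver and the softirq work land on the same CCD or not; the core-to-L3/CCD and NUMA layout can be checked like this:
Code:
# on the PVE host: CPUs sharing an L3 cache sit on the same CCD on Genoa
lscpu -e=CPU,CORE,SOCKET,NODE,CACHE
# cross-check the NUMA layout (numactl package)
numactl --hardware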

Cheers
 
Code:
iperf3 -c 172.17.1.131 -P4
Connecting to host 172.17.1.131, port 5201
[  5] local 172.17.1.132 port 48106 connected to 172.17.1.131 port 5201
[  7] local 172.17.1.132 port 48108 connected to 172.17.1.131 port 5201
[  9] local 172.17.1.132 port 48114 connected to 172.17.1.131 port 5201
[ 11] local 172.17.1.132 port 48130 connected to 172.17.1.131 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  5.46 GBytes  46.7 Gbits/sec    0    468 KBytes       
[  7]   0.00-1.00   sec  3.70 GBytes  31.6 Gbits/sec    0    512 KBytes       
[  9]   0.00-1.01   sec  5.38 GBytes  46.0 Gbits/sec    0    515 KBytes       
[ 11]   0.00-1.01   sec  3.68 GBytes  31.4 Gbits/sec    0    468 KBytes       
[SUM]   0.00-1.00   sec  18.2 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   1.00-2.00   sec  5.41 GBytes  46.7 Gbits/sec    0    468 KBytes       
[  7]   1.00-2.00   sec  3.62 GBytes  31.2 Gbits/sec    0    512 KBytes       
[  9]   1.01-2.00   sec  5.39 GBytes  46.5 Gbits/sec    0    515 KBytes       
[ 11]   1.01-2.00   sec  3.67 GBytes  31.7 Gbits/sec    0    468 KBytes       
[SUM]   1.00-2.00   sec  18.1 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec  5.44 GBytes  46.7 Gbits/sec    0    468 KBytes       
[  7]   2.00-3.00   sec  3.63 GBytes  31.2 Gbits/sec    0    512 KBytes       
[  9]   2.00-3.00   sec  5.42 GBytes  46.5 Gbits/sec    0    515 KBytes       
[ 11]   2.00-3.00   sec  3.68 GBytes  31.7 Gbits/sec    0    468 KBytes       
[SUM]   2.00-3.00   sec  18.2 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   3.00-4.00   sec  5.44 GBytes  46.7 Gbits/sec    0    468 KBytes       
[  7]   3.00-4.00   sec  3.63 GBytes  31.2 Gbits/sec    0    512 KBytes       
[  9]   3.00-4.00   sec  5.41 GBytes  46.5 Gbits/sec    0    515 KBytes       
[ 11]   3.00-4.00   sec  3.69 GBytes  31.7 Gbits/sec    0    468 KBytes       
[SUM]   3.00-4.00   sec  18.2 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   4.00-5.00   sec  5.43 GBytes  46.6 Gbits/sec    0    468 KBytes       
[  7]   4.00-5.00   sec  3.63 GBytes  31.2 Gbits/sec    0    512 KBytes       
[  9]   4.00-5.00   sec  5.41 GBytes  46.4 Gbits/sec    0    515 KBytes       
[ 11]   4.00-5.00   sec  3.69 GBytes  31.7 Gbits/sec    0    468 KBytes       
[SUM]   4.00-5.00   sec  18.1 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   5.00-6.00   sec  5.43 GBytes  46.6 Gbits/sec    0    468 KBytes       
[  7]   5.00-6.00   sec  3.63 GBytes  31.2 Gbits/sec    0    512 KBytes       
[  9]   5.00-6.00   sec  5.40 GBytes  46.4 Gbits/sec    0    515 KBytes       
[ 11]   5.00-6.00   sec  3.69 GBytes  31.7 Gbits/sec    0    496 KBytes       
[SUM]   5.00-6.00   sec  18.1 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   6.00-7.00   sec  5.40 GBytes  46.4 Gbits/sec    0    468 KBytes       
[  7]   6.00-7.00   sec  3.65 GBytes  31.3 Gbits/sec    0    512 KBytes       
[  9]   6.00-7.00   sec  5.40 GBytes  46.4 Gbits/sec    0    515 KBytes       
[ 11]   6.00-7.00   sec  3.71 GBytes  31.9 Gbits/sec    0    520 KBytes       
[SUM]   6.00-7.00   sec  18.2 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   7.00-8.00   sec  5.40 GBytes  46.3 Gbits/sec    0    468 KBytes       
[  7]   7.00-8.00   sec  3.65 GBytes  31.4 Gbits/sec    0    512 KBytes       
[  9]   7.00-8.00   sec  5.40 GBytes  46.4 Gbits/sec    0    515 KBytes       
[ 11]   7.00-8.00   sec  3.72 GBytes  32.0 Gbits/sec    0    520 KBytes       
[SUM]   7.00-8.00   sec  18.2 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   8.00-9.00   sec  5.40 GBytes  46.4 Gbits/sec    0    468 KBytes       
[  7]   8.00-9.00   sec  3.67 GBytes  31.5 Gbits/sec    0    512 KBytes       
[  9]   8.00-9.00   sec  5.39 GBytes  46.3 Gbits/sec    0    515 KBytes       
[ 11]   8.00-9.00   sec  3.73 GBytes  32.0 Gbits/sec    0    520 KBytes       
[SUM]   8.00-9.00   sec  18.2 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   9.00-10.00  sec  5.39 GBytes  46.3 Gbits/sec    0    468 KBytes       
[  7]   9.00-10.00  sec  3.66 GBytes  31.4 Gbits/sec    0    512 KBytes       
[  9]   9.00-10.00  sec  5.39 GBytes  46.3 Gbits/sec    0    515 KBytes       
[ 11]   9.00-10.00  sec  3.71 GBytes  31.9 Gbits/sec    0    520 KBytes       
[SUM]   9.00-10.00  sec  18.2 GBytes   156 Gbits/sec    0             
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  54.2 GBytes  46.5 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  54.2 GBytes  46.5 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  36.5 GBytes  31.3 Gbits/sec    0             sender
[  7]   0.00-10.00  sec  36.5 GBytes  31.3 Gbits/sec                  receiver
[  9]   0.00-10.00  sec  54.0 GBytes  46.4 Gbits/sec    0             sender
[  9]   0.00-10.00  sec  54.0 GBytes  46.4 Gbits/sec                  receiver
[ 11]   0.00-10.00  sec  37.0 GBytes  31.8 Gbits/sec    0             sender
[ 11]   0.00-10.00  sec  37.0 GBytes  31.8 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec   182 GBytes   156 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec   182 GBytes   156 Gbits/sec                  receiver

Mystery Solved!
Cheers
 
