LACP: not sure if it's working

Hurtz1234

Member
Oct 8, 2024
I'm a bit confused about how to verify that my LACP bond is really running.

I get the following warning: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond

With cat I get the following (MAC addresses blanked out for safety):

root@node1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v6.8.12-4-pve

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address:
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 23
Partner Key: 32768
Partner Mac Address:

Slave Interface: enp9s0
MII Status: up
Speed: 40000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr:
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address:
port key: 23
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address:
oper key: 32768
port priority: 32768
port number: 269
port state: 61

Slave Interface: enp9s0d1
MII Status: up
Speed: 40000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr:
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address:
port key: 23
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address:
oper key: 32768
port priority: 32768
port number: 265
port state: 61

So here everything looks good, but with dmesg | grep bond I get some errors:

[ 579.680735] bond0: (slave enp9s0): Enslaving as a backup interface with an up link
[ 579.701800] bond0: (slave enp9s0d1): Enslaving as a backup interface with an up link
[ 579.811790] bond0: (slave enp9s0): speed changed to 0 on port 1
[ 579.812271] bond0: (slave enp9s0d1): speed changed to 0 on port 2
[ 579.814008] bond0: (slave enp9s0): link status definitely down, disabling slave
[ 579.814025] bond0: (slave enp9s0d1): link status definitely down, disabling slave
[ 581.690879] bond0: (slave enp9s0): link status definitely up, 40000 Mbps full duplex
[ 581.690888] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
[ 581.691105] bond0: (slave enp9s0d1): link status definitely up, 40000 Mbps full duplex
[ 581.691110] bond0: active interface up!
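As far as I can tell, that warning is only a problem if it keeps repeating; here it fired once, right when the links flapped during enslaving, before the partner had answered. A quick way to re-check the negotiated state afterwards (standard tools only; port state 61 decodes to activity 1 + aggregation 4 + synchronization 8 + collecting 16 + distributing 32, i.e. a healthy member):

Code:
# both actor and partner should report "port state: 61" on each slave
grep -E 'Slave Interface|port state' /proc/net/bonding/bond0
# iproute2 shows the aggregator info (ad_num_ports, actor/partner key) too
ip -d link show bond0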

I also tested a bit with iperf3. I get a little over 42 Gbit/s when I connect from several nodes, but I never get anywhere close to 80 Gbit/s. Any ideas?
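(One suspicion: with the layer2+3 hash policy, every flow between the same MAC/IP pair is hashed onto one physical link, so a single pair of nodes tops out at one link's 40 Gbit/s, and only traffic between several pairs aggregates. If parallel streams between two hosts should spread over both members, the policy must include ports. A hypothetical stanza for /etc/network/interfaces, using my interface names, not my current config:)

Code:
auto bond0
iface bond0 inet manual
    bond-slaves enp9s0 enp9s0d1
    bond-mode 802.3ad
    bond-miimon 100
    # layer3+4 also hashes on TCP/UDP ports, so parallel streams
    # between the same two hosts can land on different links
    bond-xmit-hash-policy layer3+4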
 
This is from my switch. I have 5 nodes, and on the switch side the port channels also look up (SU):
Switch1# show port-channel summary
Flags: D - Down P - Up in port-channel (members)
I - Individual H - Hot-standby (LACP only)
s - Suspended r - Module-removed
b - BFD Session Wait
S - Switched R - Routed
U - Up (port-channel)
p - Up in delay-lacp mode (member)
M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
--------------------------------------------------------------------------------
1 Po1(SU) Eth LACP Eth1/3(P) Eth1/4(P)
2 Po2(SD) Eth LACP Eth1/5(D) Eth1/6(D)
3 Po3(SD) Eth LACP Eth1/7(D) Eth1/8(D)
4 Po4(SD) Eth LACP Eth1/9(D) Eth1/10(D)
5 Po5(SU) Eth LACP Eth1/11(P) Eth1/12(P)
6 Po6(SD) Eth LACP Eth1/13(D) Eth1/14(D)
7 Po7(SD) Eth LACP Eth1/15(D) Eth1/16(D)
8 Po8(SD) Eth LACP Eth1/17(D) Eth1/18(D)
9 Po9(SU) Eth LACP Eth1/19(P) Eth1/20(P)
10 Po10(SD) Eth LACP Eth1/21(D) Eth1/22(D)
11 Po11(SD) Eth LACP Eth1/23(D) Eth1/24(D)
12 Po12(SU) Eth LACP Eth1/25(P) Eth1/26(P)
13 Po13(SD) Eth LACP Eth1/27(D) Eth1/28(D)
14 Po14(SD) Eth LACP Eth1/29(D) Eth1/30(D)
15 Po15(SU) Eth LACP Eth1/31(P) Eth1/32(P)
16 Po16(SD) Eth NONE --
Switch1# show lacp counters
NOTE: Clear lacp counters to get accurate statistics

------------------------------------------------------------------------------
LACPDUs Markers/Resp LACPDUs
Port Sent Recv Recv Sent Pkts Err
------------------------------------------------------------------------------
port-channel1
Ethernet1/3 882 768 0 0 0
Ethernet1/4 882 770 0 0 0

port-channel5
Ethernet1/11 872 762 0 0 0
Ethernet1/12 873 761 0 0 0

port-channel9
Ethernet1/19 857 762 0 0 0
Ethernet1/20 857 763 0 0 0

port-channel12
Ethernet1/25 879 772 0 0 0
Ethernet1/26 879 770 0 0 0

port-channel15
Ethernet1/31 872 763 0 0 0
Ethernet1/32 872 763 0 0 0

Switch1# show interface status
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
Eth1/1 -- xcvrAbsen 1 auto auto --
Eth1/2 -- xcvrAbsen 1 auto auto --
Eth1/3 -- connected trunk full 40G QSFP-40G-CR4
Eth1/4 -- connected trunk full 40G QSFP-40G-CR4
Eth1/5 -- xcvrAbsen trunk auto auto --
Eth1/6 -- xcvrAbsen trunk auto auto --
Eth1/7 -- xcvrAbsen trunk auto auto --
Eth1/8 -- xcvrAbsen trunk auto auto --
Eth1/9 -- xcvrAbsen trunk auto auto --
Eth1/10 -- xcvrAbsen trunk auto auto --
Eth1/11 -- connected trunk full 40G QSFP-40G-CR4
Eth1/12 -- connected trunk full 40G QSFP-40G-CR4
Eth1/13 -- xcvrAbsen trunk auto auto --
Eth1/14 -- xcvrAbsen trunk auto auto --
Eth1/15 -- xcvrAbsen trunk auto auto --
Eth1/16 -- xcvrAbsen trunk auto auto --
Eth1/17 -- xcvrAbsen trunk auto auto --
Eth1/18 -- xcvrAbsen trunk auto auto --
Eth1/19 -- connected trunk full 40G QSFP-40G-CR4
Eth1/20 -- connected trunk full 40G QSFP-40G-CR4
Eth1/21 -- xcvrAbsen trunk auto auto --
Eth1/22 -- xcvrAbsen trunk auto auto --
Eth1/23 -- xcvrAbsen trunk auto auto --
Eth1/24 -- xcvrAbsen trunk auto auto --
Eth1/25 -- connected trunk full 40G QSFP-40G-CR4
Eth1/26 -- connected trunk full 40G QSFP-40G-CR4
Eth1/27 -- xcvrAbsen trunk auto auto --
Eth1/28 -- xcvrAbsen trunk auto auto --
Eth1/29 -- xcvrAbsen trunk auto auto --
Eth1/30 -- xcvrAbsen trunk auto auto --
Eth1/31 -- connected trunk full 40G QSFP-40G-CR4
Eth1/32 -- connected trunk full 40G QSFP-40G-CR4
Po1 -- connected trunk full 40G --
Po2 -- noOperMem trunk auto auto --
Po3 -- noOperMem trunk auto auto --
Po4 -- noOperMem trunk auto auto --
Po5 -- connected trunk full 40G --
Po6 -- noOperMem trunk auto auto --
Po7 -- noOperMem trunk auto auto --
Po8 -- noOperMem trunk auto auto --
Po9 -- connected trunk full 40G --
Po10 -- noOperMem trunk auto auto --
Po11 -- noOperMem trunk auto auto --
Po12 -- connected trunk full 40G --
Po13 -- noOperMem trunk auto auto --
Po14 -- noOperMem trunk auto auto --
Po15 -- connected trunk full 40G --
Po16 -- noOperMem trunk auto auto --
 
Right, testing whether an LACP bond is really working should always be done properly with data. The best way is with an NFS mount.
It looks like all 5 of your hosts have 2x40Gb bonded, so testing between one pair of hosts at a time is fine.
Create a big file, e.g. 100GB (as 2x40Gb is theoretically 10GB/s), and read it into RAM: "cat /path/100gfile >/dev/null". Export /path to the other host.
On the other host, mount it with "mount -o nconnect=8 host1:/path /net" (choose your own names) and raise the readahead with "echo 8192 > /sys/class/bdi/$(mountpoint -d /net)/read_ahead_kb" (choose your mountpoint). In another shell run "sar -n DEV 1 --iface=$(ls /sys/devices/pci*/*/*/net/|grep -v pci|xargs|sed s'/ /,/'g)" to watch the network interfaces, then start "cat /net/100gfile >/dev/null" and check your throughput in the shell where sar is running.
Once that is OK, be sure to run this test for each host pair (1+2, 1+3, 1+4, 1+5, 2+3, 2+4, 2+5, 3+4, 3+5, 4+5); a sketch of that loop follows below.
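A minimal sketch of that pair loop, assuming passwordless SSH, hostnames node1..node5, and that every node exports /path as described (all names are placeholders):

Code:
hosts=(node1 node2 node3 node4 node5)
for ((i=0; i<${#hosts[@]}; i++)); do
  for ((j=i+1; j<${#hosts[@]}; j++)); do
    echo "=== ${hosts[$i]} -> ${hosts[$j]} ==="
    # read the exported file over NFS and discard it; watch sar meanwhile
    ssh "${hosts[$j]}" "mount -o nconnect=8 ${hosts[$i]}:/path /net && cat /net/100gfile >/dev/null && umount /net"
  done
done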
 
Thank you for the information. My hard disk is not fast enough, so I took a different approach: I used iperf3, but transferring a file from a virtual disk that lives in my RAM. I also changed the buffers:

Buffer Change:
Code:
sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.ipv4.tcp_mtu_probing=1
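(These sysctl values are gone after a reboot; the usual Debian way to persist them is a drop-in file, the file name is just a convention:)

Code:
cat >/etc/sysctl.d/90-net-bench.conf <<'EOF'
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_mtu_probing = 1
EOF
sysctl --system   # reload all sysctl drop-ins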

Mount ramdisk:
Code:
mkdir /mnt/ramdisk
mount -t tmpfs -o size=12G tmpfs /mnt/ramdisk

Create file:
Code:
cd /mnt/ramdisk
dd if=/dev/zero of=large_test_file.bin bs=1M count=10000

Run iperf3 on the server:
Code:
iperf3 -s -p 5001 & iperf3 -s -p 5002 & iperf3 -s -p 5003 & iperf3 -s -p 5004

Run iperf3 on client:
Code:
iperf3 -c <ipv6 address> -F /mnt/ramdisk/large_test_file.bin -P 5 -w 128M -t 1000 -p 5001
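(For completeness, a variant that drives all four listening servers at once; it only helps if the hash policy is port-aware, see the note above. <ipv6 address> is a placeholder:)

Code:
for p in 5001 5002 5003 5004; do
    iperf3 -c <ipv6 address> -P 4 -t 30 -p "$p" &
done
wait   # let all four client runs finish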

But I never get over 50 Gbit/s:
Code:
[  5]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.00 MBytes
[  8]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0    976 KBytes  
[ 11]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.02 MBytes  
[ 14]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.51 MBytes  
[ 17]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.08 MBytes  
[ 20]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.01 MBytes  
[ 23]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.41 MBytes  
[ 26]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.20 MBytes  
[ 29]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.31 MBytes  
[ 32]   0.00-1.00   sec   562 MBytes  4.71 Gbits/sec    0   1.09 MBytes  
[SUM]   0.00-1.00   sec  5.49 GBytes  47.1 Gbits/sec    0        
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   1.00-2.00   sec   336 MBytes  2.81 Gbits/sec   91   1.13 MBytes  
[  8]   1.00-2.00   sec   335 MBytes  2.81 Gbits/sec  112    680 KBytes  
[ 11]   1.00-2.00   sec   337 MBytes  2.83 Gbits/sec   98   1.27 MBytes  
[ 14]   1.00-2.00   sec   436 MBytes  3.65 Gbits/sec  112    863 KBytes  
[ 17]   1.00-2.00   sec   336 MBytes  2.81 Gbits/sec  105    985 KBytes  
[ 20]   1.00-2.00   sec   336 MBytes  2.82 Gbits/sec   91   1.70 MBytes  
[ 23]   1.00-2.00   sec   387 MBytes  3.25 Gbits/sec  119   1.19 MBytes  
[ 26]   1.00-2.00   sec   363 MBytes  3.04 Gbits/sec   91    628 KBytes  
[ 29]   1.00-2.00   sec   425 MBytes  3.56 Gbits/sec  119   1.23 MBytes  
[ 32]   1.00-2.00   sec   336 MBytes  2.82 Gbits/sec   77   1.86 MBytes  
[SUM]   1.00-2.00   sec  3.54 GBytes  30.4 Gbits/sec  1015        
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec   246 MBytes  2.07 Gbits/sec  764    506 KBytes  
[  8]   2.00-3.00   sec   285 MBytes  2.39 Gbits/sec  332   1.32 MBytes  
[ 11]   2.00-3.00   sec   416 MBytes  3.50 Gbits/sec  224   1.29 MBytes  
[ 14]   2.00-3.00   sec   274 MBytes  2.30 Gbits/sec  182   1.36 MBytes  
[ 17]   2.00-3.00   sec   256 MBytes  2.15 Gbits/sec  175    793 KBytes  
[ 20]   2.00-3.00   sec   675 MBytes  5.67 Gbits/sec  590   6.21 MBytes  
[ 23]   2.00-3.00   sec   274 MBytes  2.30 Gbits/sec  194    872 KBytes  
[ 26]   2.00-3.00   sec   634 MBytes  5.33 Gbits/sec  290   1.27 MBytes  
[ 29]   2.00-3.00   sec   313 MBytes  2.63 Gbits/sec  433    689 KBytes  
[ 32]   2.00-3.00   sec   378 MBytes  3.18 Gbits/sec  197    706 KBytes  
[SUM]   2.00-3.00   sec  3.66 GBytes  31.5 Gbits/sec  3381        
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   3.00-4.00   sec   539 MBytes  4.52 Gbits/sec  420   1.12 MBytes  
[  8]   3.00-4.00   sec   316 MBytes  2.65 Gbits/sec  403   1.23 MBytes  
[ 11]   3.00-4.00   sec   453 MBytes  3.79 Gbits/sec  203   1.21 MBytes  
[ 14]   3.00-4.00   sec   276 MBytes  2.32 Gbits/sec  210    567 KBytes  
[ 17]   3.00-4.00   sec   232 MBytes  1.95 Gbits/sec  658    523 KBytes  
[ 20]   3.00-4.00   sec   282 MBytes  2.36 Gbits/sec  123    619 KBytes  
[ 23]   3.00-4.00   sec   377 MBytes  3.16 Gbits/sec  238   1.40 MBytes  
[ 26]   3.00-4.00   sec   277 MBytes  2.32 Gbits/sec  196   1.17 MBytes  
[ 29]   3.00-4.00   sec   643 MBytes  5.39 Gbits/sec  717   6.33 MBytes  
[ 32]   3.00-4.00   sec   382 MBytes  3.20 Gbits/sec  179   1.18 MBytes  
[SUM]   3.00-4.00   sec  3.69 GBytes  31.7 Gbits/sec  3347        
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   4.00-5.00   sec   451 MBytes  3.79 Gbits/sec  165    140 KBytes  
[  8]   4.00-5.00   sec   490 MBytes  4.12 Gbits/sec    0   1.36 MBytes  
[ 11]   4.00-5.00   sec   490 MBytes  4.12 Gbits/sec  229    131 KBytes  
[ 14]   4.00-5.00   sec   309 MBytes  2.59 Gbits/sec  224   34.9 KBytes  
[ 17]   4.00-5.00   sec   470 MBytes  3.95 Gbits/sec    0   1.69 MBytes  
[ 20]   4.00-5.00   sec   490 MBytes  4.12 Gbits/sec    0   1.35 MBytes  
[ 23]   4.00-5.00   sec   490 MBytes  4.12 Gbits/sec    0   1.48 MBytes  
[ 26]   4.00-5.00   sec   474 MBytes  3.98 Gbits/sec    0   1.37 MBytes  
[ 29]   4.00-5.00   sec   490 MBytes  4.12 Gbits/sec    0   6.33 MBytes  
[ 32]   4.00-5.00   sec   490 MBytes  4.12 Gbits/sec    0   1.37 MBytes  
[SUM]   4.00-5.00   sec  4.53 GBytes  39.0 Gbits/sec  618        
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   5.00-6.00   sec   274 MBytes  2.29 Gbits/sec  180    793 KBytes  
[  8]   5.00-6.00   sec   422 MBytes  3.53 Gbits/sec  112   1.16 MBytes  
[ 11]   5.00-6.00   sec   405 MBytes  3.39 Gbits/sec  190   1.26 MBytes  
[ 14]   5.00-6.00   sec   270 MBytes  2.26 Gbits/sec  256    759 KBytes  
[ 17]   5.00-6.00   sec   454 MBytes  3.80 Gbits/sec   89    933 KBytes  
[ 20]   5.00-6.00   sec   450 MBytes  3.77 Gbits/sec  105   1.12 MBytes  
[ 23]   5.00-6.00   sec   451 MBytes  3.78 Gbits/sec   89    889 KBytes  
[ 26]   5.00-6.00   sec   378 MBytes  3.17 Gbits/sec  119   1.17 MBytes  
[ 29]   5.00-6.00   sec   429 MBytes  3.59 Gbits/sec    4   1.16 MBytes  
[ 32]   5.00-6.00   sec   439 MBytes  3.67 Gbits/sec   90    802 KBytes  
[SUM]   5.00-6.00   sec  3.88 GBytes  33.2 Gbits/sec  1234        
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   6.00-7.00   sec   410 MBytes  3.44 Gbits/sec    0   1.09 MBytes  
[  8]   6.00-7.00   sec   481 MBytes  4.04 Gbits/sec    0   1.16 MBytes  
[ 11]   6.00-7.00   sec   533 MBytes  4.47 Gbits/sec    0   1.26 MBytes  
[ 14]   6.00-7.00   sec   296 MBytes  2.49 Gbits/sec    0    881 KBytes  
[ 17]   6.00-7.00   sec   480 MBytes  4.02 Gbits/sec    0   1.18 MBytes  
[ 20]   6.00-7.00   sec   516 MBytes  4.33 Gbits/sec    0   1.12 MBytes  
[ 23]   6.00-7.00   sec   476 MBytes  4.00 Gbits/sec    0   1.14 MBytes  
[ 26]   6.00-7.00   sec   539 MBytes  4.52 Gbits/sec    0   1.17 MBytes  
[ 29]   6.00-7.00   sec   504 MBytes  4.23 Gbits/sec    0   1.16 MBytes  
[ 32]   6.00-7.00   sec   375 MBytes  3.14 Gbits/sec    0    933 KBytes  
[SUM]   6.00-7.00   sec  4.50 GBytes  38.7 Gbits/sec    0        
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   7.00-8.00   sec   495 MBytes  4.15 Gbits/sec    0   1.09 MBytes  
[  8]   7.00-8.00   sec   495 MBytes  4.15 Gbits/sec    0   1.16 MBytes  
[ 11]   7.00-8.00   sec   495 MBytes  4.15 Gbits/sec    0   1.26 MBytes  
[ 14]   7.00-8.00   sec   439 MBytes  3.68 Gbits/sec    0    924 KBytes  
[ 17]   7.00-8.00   sec   495 MBytes  4.15 Gbits/sec    0   1.18 MBytes  
[ 20]   7.00-8.00   sec   495 MBytes  4.15 Gbits/sec    0   1.12 MBytes  
[ 23]   7.00-8.00   sec   495 MBytes  4.15 Gbits/sec    0   1.14 MBytes  
[ 26]   7.00-8.00   sec   495 MBytes  4.15 Gbits/sec    0   1.17 MBytes  
[ 29]   7.00-8.00   sec   495 MBytes  4.15 Gbits/sec    0   1.16 MBytes  
[ 32]   7.00-8.00   sec   465 MBytes  3.90 Gbits/sec    0    976 KBytes  
[SUM]   7.00-8.00   sec  4.75 GBytes  40.8 Gbits/sec    0

Here is the output from ifstat while writing:
Code:
   enp8s0              enp9s0             enp9s0d1             vmbr0               bond0    
 KB/s in  KB/s out   KB/s in  KB/s out   KB/s in  KB/s out   KB/s in  KB/s out   KB/s in  KB/s out
    0.39      1.07      0.00      0.00  3.89e+06   7684.43      0.33      1.07  3.89e+06   7684.43
    1.53      2.53      0.00      0.12  2.99e+06   5346.64      1.46      2.53  2.99e+06   5346.93
    1.86      2.16      0.00      0.00  3.05e+06   5114.59      1.76      2.16  3.05e+06   5114.42
    2.31      2.50  2.41e+06   6128.49  2.23e+06   5452.19      2.12      2.50  4.64e+06  11580.59
    4.40      4.98  3.51e+06   5964.02  384649.3   1502.44      4.22      4.93  3.89e+06   7466.54
    0.86      0.17  4.87e+06   9090.92      0.00   4754.48      0.74      0.17  4.87e+06  13845.40
    1.66      1.57  3.84e+06  12199.57  1.71e+06   5840.51      1.58      1.57  5.55e+06  18040.25
    1.95      2.91  3.53e+06  10182.52  2.22e+06   8820.98      1.79      2.91  5.75e+06  19003.85
    3.52      3.45  2.97e+06  10068.71  3.09e+06  13354.73      3.14      3.45  6.06e+06  23423.31
    2.32      1.07  3.12e+06  14471.00  2.98e+06  10734.96      2.03      1.07  6.10e+06  25206.00
 
I don't know your iperf-with-file approach (use iperf just as a very first hardware test), but you may be limited to 100% of one core, as with "cat /dev/zero|ssh othernode cat - >/dev/null" (watch top), whereas the combination of cat with NFS uses really little CPU.
As I wrote: load the file into RAM, then read it over NFS to /dev/null, so there is no disk access involved. It just needs a lot of RAM, and I didn't think you would be short on RAM when running a big 40Gb network ... so make the file smaller to fit in your RAM; you will just have to watch the running sar output more quickly.
 
I tried NFS, but I have issues. Looks like permission issues; I'm not very deep into NFS.

But I searched for something similar to what you recommend. I tried it with this command, but it gets really slow, only about 500 MB/s:
dd if=/dev/zero bs=4096 count=1048576 | ssh root@<ipv6 address> 'cat > /dev/null'

CPU performance is not the issue: it's a Ryzen 5600 (6 cores / 12 threads), and it sits at only 20-30% during iperf3 and 12-16% here.
 
But I searched for something similar to what you recommend. I tried it with this command, but it gets really slow, only about 500 MB/s:
dd if=/dev/zero bs=4096 count=1048576 | ssh root@<ipv6 address> 'cat > /dev/null'
With "ssh" you would definitive reach 100% cpu limit and not the network limit !!
 
You want to test whether your bonding is working, right? Do your dd with bs=1M piped to ssh ... and you will definitely see the CPU limit being reached in top. The PVE GUI has no 1-second updates, so you get CPU averaged over time, which is misleading.
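I.e. something like the sketch below; with bs=4096 the pipe does a syscall per 4 KB, which throttles long before the network does, while with bs=1M the bottleneck moves to the ssh cipher (one core at 100%):

Code:
dd if=/dev/zero bs=1M count=10240 | ssh root@<ipv6 address> 'cat > /dev/null'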
 
I could get it to work, but I don't get high speed. This is the maximum output; it's somewhere around 20 Gbit/s.
Code:
10:55:10 AM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
10:55:11 AM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:55:11 AM    enp9s0      0.00  38125.00      0.00   3121.34      0.00      0.00      1.00      0.06
10:55:11 AM  enp9s0d1 267419.00      0.00 2338245.91      0.00      0.00      0.00      1.00     47.89

10:55:11 AM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
10:55:12 AM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:55:12 AM    enp9s0      0.00  37106.00      0.00   3056.83      0.00      0.00      2.00      0.06
10:55:12 AM  enp9s0d1 267940.00      0.00 2342804.30      0.00      0.00      0.00      0.00     47.98

10:55:12 AM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
10:55:13 AM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:55:13 AM    enp9s0      0.00  18034.00      0.00   1492.42      0.00      0.00      0.00      0.03
10:55:13 AM  enp9s0d1 132893.00      0.00 1161974.95      0.00      0.00      0.00      0.00     23.80
 
That's how I have done it.
Code:
Server:
mkdir /path

Create file:
dd if=/dev/zero of=/path/100gfile bs=1G count=100

Install NFS:
apt install nfs-kernel-server
nano /etc/exports
/path *(rw,no_root_squash,sync,no_subtree_check)
exportfs -r
systemctl start nfs-server
systemctl enable nfs-server

Code:
Client:
apt install nfs-common   (the client only needs nfs-common, not the server package)
mkdir /net
mount -o nconnect=8,clientaddr=[<client ipv6>] [<server ipv6>]:/path /net
echo 8192 > /sys/class/bdi/$(mountpoint -d /net)/read_ahead_kb
cat /net/100gfile > /dev/null

Watch with:
apt install -y sysstat
sar -n DEV 1 --iface=$(ls /sys/devices/pci*/*/*/net/ | grep -v pci | xargs | sed 's/ /,/g')
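(One more check: on older kernels nconnect is silently ignored, so it may be worth verifying the effective mount options:)

Code:
nfsstat -m                 # live NFS mount options; look for nconnect=8
grep ' /net ' /proc/mounts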
 
That's quite slow (~20Gbit). What's the filesystem where you have /path/100gfile?
What does "pv /path/100gfile >/dev/null" give (without the network)?
 
The filesystem for testing is the Proxmox NVMe, but it only has a PCIe 3.0 x1 connection (roughly 1 GB/s at best). That's why I did it with iperf3 and loaded the file into RAM instead.
 
ZFS ARC could hurt a lot here, compared to the plain kernel page cache, which would explain your 2.3 GB/s network measurement.
 
OK, I see my issue: my RAM is slower than the Ethernet connection XD
DDR4 RAM is 52 Gbit/s :)
 
Oh no, never. But try it in parallel, on the client:
echo 3 > /proc/sys/vm/drop_caches
for i in {1..6}; do cat /net/100gfile > /dev/null & done
 
I now started reading your file from each node; that gives me 60-70%, which is good I think. But I can't fill it from just one other node: a single node is always around 20% utilization, and only on enp9s0.

Code:
01:07:22 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
01:07:23 PM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:07:23 PM    enp9s0 193011.00 236793.00  27866.99 1912242.74      0.00      0.00      2.00     39.16
01:07:23 PM  enp9s0d1      1.00 236095.00      0.08 1907764.47      0.00      0.00      0.00     39.07

01:07:23 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
01:07:24 PM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:07:24 PM    enp9s0 192035.00 236838.00  27692.15 1910797.45      0.00      0.00      0.00     39.13
01:07:24 PM  enp9s0d1      1.00 235983.00      0.08 1909575.57      0.00      0.00      0.00     39.11

01:07:24 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
01:07:25 PM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:07:25 PM    enp9s0 188777.00 228973.00  26741.01 1854022.72      0.00      0.00      2.00     37.97
01:07:25 PM  enp9s0d1  30727.00 295318.00   4107.87 2399580.85      0.00      0.00      0.00     49.14

01:07:25 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
01:07:26 PM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:07:26 PM    enp9s0 152611.00 139840.00  21121.31 1127901.00      0.00      0.00      0.00     23.10
01:07:26 PM  enp9s0d1  98207.00 427328.00  14103.34 3456193.37      0.00      0.00      0.00     70.78

01:07:26 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
01:07:27 PM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:07:27 PM    enp9s0  88997.00      0.00  12598.20      0.00      0.00      0.00      0.00      0.26
01:07:27 PM  enp9s0d1  98577.00 457192.00  14383.14 3701801.43      0.00      0.00      0.00     75.81

01:07:27 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
01:07:28 PM    enp1s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:07:28 PM    enp9s0  85513.00      1.00  12323.76      0.08      0.00      0.00      2.00      0.25
01:07:28 PM  enp9s0d1 105554.00 436662.00  15763.96 3486685.29      0.00      0.00      0.00     71.41
 