Proxmox Network Question

starnetwork

Renowned Member
Dec 8, 2009
Hi,
I have a Supermicro MicroBlade with 2x 10Gb switches.
Each node has 2 network connections, one NIC connected to each switch.
Both connections are set up in Proxmox as bond0 using LACP (802.3ad).
As far as I know, with this setup I should get 20Gbps total, but I only get 10Gb.
Any idea why?
Code:
# iperf3 -c 192.168.0.3
Connecting to host 192.168.0.3, port 5201
[  4] local 192.168.0.2 port 55504 connected to 192.168.0.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.10 GBytes  9.41 Gbits/sec   18    936 KBytes
[  4]   1.00-2.00   sec  1.09 GBytes  9.37 Gbits/sec    1   1.04 MBytes
[  4]   2.00-3.00   sec  1.09 GBytes  9.40 Gbits/sec  232    742 KBytes
[  4]   3.00-4.00   sec  1.09 GBytes  9.37 Gbits/sec    0    973 KBytes
[  4]   4.00-5.00   sec  1.09 GBytes  9.39 Gbits/sec    0    974 KBytes
[  4]   5.00-6.00   sec  1.09 GBytes  9.35 Gbits/sec    0    974 KBytes
[  4]   6.00-7.00   sec  1.09 GBytes  9.40 Gbits/sec   14    853 KBytes
[  4]   7.00-8.00   sec  1.09 GBytes  9.36 Gbits/sec   50    731 KBytes
[  4]   8.00-9.00   sec  1.09 GBytes  9.38 Gbits/sec   24    790 KBytes
[  4]   9.00-10.00  sec  1.09 GBytes  9.35 Gbits/sec    0    919 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec  339             sender
[  4]   0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec                  receiver

iperf Done.

network:
Code:
auto lo
iface lo inet loopback

iface enp3s0f0 inet manual

iface enp3s0f1 inet manual

auto bond0
iface bond0 inet manual
        slaves enp3s0f0 enp3s0f1
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address  XXX.XXX.XXX.XXX
        netmask  XXX.XXX.XXX.XXX
        gateway  XXX.XXX.XXX.XXX
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

Thanks!
 
That's not how LACP and most link aggregations/bonds work. Communication between two individual nodes is not split across both links.

LACP can be used to expand the available bandwidth for many connections between many nodes, for example many clients hitting a server, or for connecting a couple of switches together.

It is also good for accommodating a link failure (e.g., one cable breaks, all traffic is sent over the other).
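You can verify the negotiated LACP state and see which transmit hash policy the bond uses; a quick check, assuming your bond is named bond0 as in your config:
Code:
# Show LACP/802.3ad state, per-slave details, and the transmit hash policy
cat /proc/net/bonding/bond0

# The hash policy can also be read via sysfs
cat /sys/class/net/bond0/bonding/xmit_hash_policy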
 
Thanks for that answer!
1. I did a new test: I started listeners on 2 nodes, then ran tests from the source node simultaneously to those 2 listening nodes,
and it split the bandwidth to ~5Gb each instead of keeping 10Gb for each node.

2. Is there a better bond mode for utilizing the network traffic and high availability?
Regards,
 
This is still LACP related, and honestly that is the best bonding mode. LACP uses certain pieces of information to determine which link it will use for each connection (and this can be configured). The connection then stays on that link for its entire life, assuming the link does not go down. You must have just had a setup where both of your test flows hashed onto the same link. As you scale the traffic you should start to see a more even split across the two links.
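One thing worth checking (a suggestion, not a guaranteed fix): the transmit hash policy. The default layer2 policy hashes only on MAC addresses, while layer3+4 also includes IP addresses and TCP/UDP ports, so separate iperf3 sessions between the same two hosts can land on different links. A sketch using the same underscore syntax as your existing config (newer setups use the dashed form bond-xmit-hash-policy); note your switches must also be set to balance on L3/L4 for the return traffic, and layer3+4 is not strictly 802.3ad compliant:
Code:
auto bond0
iface bond0 inet manual
        slaves enp3s0f0 enp3s0f1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4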
 
1. It's still communication between two individual nodes, so you're going to be limited to the bandwidth of a single link. LACP determines how to direct traffic over the bond's links using the IP or MAC addresses of the two sides, which of course are the same no matter how many sessions are active between the two. However, if you have multiple guests running on each node, each has its own unique IP and MAC, so if they need to talk to each other the communication should be split over the two links.

2. Like @lweidig said, LACP is the best and the standard for general link aggregation.
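To make the "same addresses always pick the same link" behaviour concrete: with the default layer2 policy the bonding driver roughly XORs the source and destination MAC addresses and takes the result modulo the number of slaves, so a fixed pair of hosts always maps to the same slave. A simplified illustration (not the exact kernel code, MAC bytes are made up):
Code:
# Last byte of each MAC (decimal here), XORed, modulo the number of slaves (2)
SRC_MAC_LAST_BYTE=2   # hypothetical sender
DST_MAC_LAST_BYTE=3   # hypothetical receiver
echo "slave index: $(( (SRC_MAC_LAST_BYTE ^ DST_MAC_LAST_BYTE) % 2 ))"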
 
Thanks for that info!
Any suggestion on how I can use both eth connections all the time, and not just as failover?
lacp_rate?
Any additional setting that will make this test run at 20Gbps?

Regards,
 
A single connection will "pick" one of the devices to use, and therefore most tests that you run will never exceed 10G. HOWEVER, as mentioned, when you have multiple hosts they should start distributing across the links so that the aggregate bandwidth will be 20G. With only a few hosts you should be able to look at stats on your switch and see that there is traffic going across both ports. If not, you may have something set up wrong, as LACP is not just a failover solution; it provides load balancing among the ports.
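Besides the switch counters, you can also watch the per-slave counters on the node itself while a test runs; if both interfaces are climbing, the bond is balancing. For example, with the interface names from your config:
Code:
# Per-slave traffic counters; compare the TX/RX byte counts before and after a test
ip -s link show dev enp3s0f0
ip -s link show dev enp3s0f1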
 
Hi,
1. I am talking about connections to 2 nodes simultaneously.
2. It works via 2 different switches, eth0 via switch1 and eth1 via switch2.
I tried disabling each eth and it still works, meaning the network works via both connections...
Any suggestions on how I can take advantage of this double connection and see 20Gb over the network?

Thanks again for your help!
 
LACP and other ethernet bonding/aggregation protocols don't work the way you want/expect them to. You will not get more than 10 Gb between the two nodes when initiating the communication from/to the host OS itself. The number of switches/etc. between the two is irrelevant. The number of simultaneous unique iperf/FTP/SSH/SMB/whatever test sessions run is irrelevant.

As we've mentioned, you'll only start to see aggregate bandwidth >10 Gb when you have multiple guests on the nodes, each with their own unique virtual NICs and MACs, communicating with each other. But still, no single guest on one node will be able to do >10 Gb to one on the other node.

So try this: Set up Proxmox on each node, making sure you have NIC bonding properly set up. Then set up a few basic linux guests on each. Initiate the throughput tests between various guests across the two nodes. You should see the total bandwidth utilized cover both links. You may have to play with the LACP settings on Proxmox and/or the switch (e.g., IP and/or MAC hashing).
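For example (guest addresses here are hypothetical), start listeners on two guests on node B and run clients from two guests on node A at the same time; with working LACP hashing, the two flows can land on different links:
Code:
# On two guests hosted on node B (e.g. 10.0.0.11 and 10.0.0.12)
iperf3 -s

# On two guests hosted on node A, run simultaneously
iperf3 -c 10.0.0.11 -t 30
iperf3 -c 10.0.0.12 -t 30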
 
