Proxmox Network Question

Discussion in 'Proxmox VE: Networking and Firewall' started by starnetwork, Jan 12, 2018.

  1. starnetwork

    starnetwork Member

    Joined:
    Dec 8, 2009
    Messages:
    335
    Likes Received:
    3
    Hi,
    I have a Supermicro MicroBlade with 2x 10Gb switches.
    Each node has two network connections, one eth to each switch.
    Both connections are set up in Proxmox as bond0 using LACP (802.3ad).
    As far as I know, with this setup I should get 20Gbps total,
    but I only got 10Gb.
    Any idea why?
    Code:
    # iperf3 -c 192.168.0.3
    Connecting to host 192.168.0.3, port 5201
    [  4] local 192.168.0.2 port 55504 connected to 192.168.0.3 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  1.10 GBytes  9.41 Gbits/sec   18    936 KBytes
    [  4]   1.00-2.00   sec  1.09 GBytes  9.37 Gbits/sec    1   1.04 MBytes
    [  4]   2.00-3.00   sec  1.09 GBytes  9.40 Gbits/sec  232    742 KBytes
    [  4]   3.00-4.00   sec  1.09 GBytes  9.37 Gbits/sec    0    973 KBytes
    [  4]   4.00-5.00   sec  1.09 GBytes  9.39 Gbits/sec    0    974 KBytes
    [  4]   5.00-6.00   sec  1.09 GBytes  9.35 Gbits/sec    0    974 KBytes
    [  4]   6.00-7.00   sec  1.09 GBytes  9.40 Gbits/sec   14    853 KBytes
    [  4]   7.00-8.00   sec  1.09 GBytes  9.36 Gbits/sec   50    731 KBytes
    [  4]   8.00-9.00   sec  1.09 GBytes  9.38 Gbits/sec   24    790 KBytes
    [  4]   9.00-10.00  sec  1.09 GBytes  9.35 Gbits/sec    0    919 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec  339             sender
    [  4]   0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec                  receiver
    
    iperf Done.
    network:
    Code:
    auto lo
    iface lo inet loopback
    
    iface enp3s0f0 inet manual
    
    iface enp3s0f1 inet manual
    
    auto bond0
    iface bond0 inet manual
            slaves enp3s0f0 enp3s0f1
            bond_miimon 100
            bond_mode 802.3ad
    
    auto vmbr0
    iface vmbr0 inet static
            address  XXX.XXX.XXX.XXX
            netmask  XXX.XXX.XXX.XXX
            gateway  XXX.XXX.XXX.XXX
            bridge_ports bond0
            bridge_stp off
            bridge_fd 0
    Thanks!
     
  2. BlueLineSwinger

    BlueLineSwinger New Member

    Joined:
    Sep 11, 2017
    Messages:
    24
    Likes Received:
    0
    That's not how LACP and most link aggregations/bonds work. Communication between two individual nodes is not split across both links.

    LACP can be used to expand available bandwidth for many connections between many nodes. For example, if you have many clients hitting a server, or to connect a couple of switches.

    It is also good for accommodating a link failure (e.g., one cable breaks, all traffic is sent over the other).
     
  3. starnetwork

    starnetwork Member

    Joined:
    Dec 8, 2009
    Messages:
    335
    Likes Received:
    3
    Thanks for that answer!
    1. I did a new test: I opened listeners on 2 nodes, then ran tests from the source node simultaneously to these 2 listening nodes,
    and it split the bandwidth to 5Gb each rather than keeping 10Gb for each node.

    2. Is there a better bond mode for utilizing network traffic and high availability?
    Regards,
     
  4. lweidig

    lweidig Member

    Joined:
    Oct 20, 2011
    Messages:
    101
    Likes Received:
    2
    This is still LACP related and honestly that is the best bonding mode. LACP uses certain pieces of information to determine which link it will use for each connection (and this can be configured). The connection then stays on that link for its entire life assuming the link does not go down. You must have just had the right setup that both of your nodes hit one of the links. As you scale the traffic you should start to see a more even split across the two links.
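    A quick way to see why a single connection never leaves one link: with the default "layer2" transmit hash, the kernel picks the slave from the two MAC addresses alone (the bonding docs give hash = (src MAC XOR dst MAC XOR packet type) mod slave count), so every frame between the same pair of hosts takes the same link. A simplified sketch of that formula, with made-up MAC bytes and the packet-type term left out:

```shell
# Simplified layer2 xmit hash from the kernel bonding docs
# (the real formula also XORs in the packet type ID).
SRC_MAC=0x02   # low byte of the sender's MAC (made up)
DST_MAC=0x0b   # low byte of the receiver's MAC (made up)
SLAVES=2       # two bonded 10G ports
echo $(( (SRC_MAC ^ DST_MAC) % SLAVES ))   # prints 1: same slave for every flow
```

    Because IPs and ports never enter this hash, opening more sessions between the same two hosts cannot change the result.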
     
  5. BlueLineSwinger

    BlueLineSwinger New Member

    Joined:
    Sep 11, 2017
    Messages:
    24
    Likes Received:
    0

    1. It's still communication between two individual nodes, so you're going to be limited to the bandwidth of a single link. LACP determines how to direct traffic over the bond's links using the IP or MAC addresses of the two sides, which of course are the same no matter how many sessions are active between the two. However, if you have multiple guests running on each node each has their own unique IPs and MACs, so if they need to talk to each other communication should be split over the two links.

    2. Like @lweidig said, LACP is the best and the standard for general link aggregation.
     
  6. starnetwork

    starnetwork Member

    Joined:
    Dec 8, 2009
    Messages:
    335
    Likes Received:
    3
    Thanks for that info!
    Any suggestion on how I can use both eth connections all the time, rather than as failover?
    lacp_rate?
    Any additional setting that will make this test run at 20Gbps?

    Regards,
     
  7. lweidig

    lweidig Member

    Joined:
    Oct 20, 2011
    Messages:
    101
    Likes Received:
    2
    A single connection will "pick" one of the devices to use, and therefore most tests that you run will never exceed 10G. HOWEVER, as mentioned, when you have multiple hosts they should start distributing across the links so that the aggregate bandwidth will be 20G. With only a few hosts you should be able to look at the stats on your switch and see that there is traffic going across both ports. If not, you may have something set up wrong, as LACP is not just a failover solution; it provides load balancing among the ports.
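    To check the per-port split from the Proxmox side rather than the switch, the standard bonding status file and per-slave counters work; a sketch, assuming the interface names from the config posted above:
    Code:
    cat /proc/net/bonding/bond0    # bond mode, hash policy, per-slave status
    ip -s link show enp3s0f0       # RX/TX byte counters for the first slave
    ip -s link show enp3s0f1       # and for the second
    If the byte counters on one slave barely move while traffic runs, all flows are hashing to the other port.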
     
  8. starnetwork

    starnetwork Member

    Joined:
    Dec 8, 2009
    Messages:
    335
    Likes Received:
    3
    Hi,
    1. I'm talking about connections to 2 nodes simultaneously.
    2. It works via 2 different switches, eth0 via switch1 and eth1 via switch2.
    I tried disabling each eth and it still worked, meaning the network works via both connections...
    Any suggestions on how I can take advantage of this double connection and see 20Gb over the network?

    thanks again for your help!
     
  9. BlueLineSwinger

    BlueLineSwinger New Member

    Joined:
    Sep 11, 2017
    Messages:
    24
    Likes Received:
    0
    LACP and other ethernet bonding/aggregation protocols don't work the way you want/expect them to. You will not get more than 10 Gb between the two nodes when initiating the communication from/to the host OS itself. The number of switches/etc. between the two is irrelevant. The number of simultaneous unique iperf/FTP/SSH/SMB/whatever test sessions run is irrelevant.

    As we've mentioned, you'll only start to see aggregate bandwidth >10 Gb when you have multiple guests on the nodes, each with their own unique virtual NICs and MACs, communicating with each other. But still, no single guest on one node will be able to do >10 Gb to one on the other node.

    So try this: Set up Proxmox on each node, making sure you have NIC bonding properly set up. Then set up a few basic Linux guests on each. Initiate the throughput tests between various guests across the two nodes. You should see the total bandwidth utilized cover both links. You may have to play with the LACP settings on Proxmox and/or the switch (e.g., IP and/or MAC hashing).
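    One knob worth knowing when playing with the LACP settings is bond_xmit_hash_policy. A sketch of the bond stanza from the first post with layer3+4 hashing added: it hashes on TCP/UDP ports as well as addresses, so separate sessions between the same pair of hosts can land on different links (note this policy is not strictly 802.3ad compliant, and a single session still uses only one link):
    Code:
    auto bond0
    iface bond0 inet manual
            slaves enp3s0f0 enp3s0f1
            bond_miimon 100
            bond_mode 802.3ad
            bond_xmit_hash_policy layer3+4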
     
  10. starnetwork

    starnetwork Member

    Joined:
    Dec 8, 2009
    Messages:
    335
    Likes Received:
    3
    Dear BlueLineSwinger,
    thanks for that detailed answer!
     