NIC Bonding Fun

Discussion in 'Proxmox VE: Networking and Firewall' started by UcANteKMe42, Nov 19, 2018.

  1. UcANteKMe42

    UcANteKMe42 New Member

    Joined:
    May 4, 2016
    Messages:
    2
    Likes Received:
    2
    Hey people,

    I was testing bonding with 4-port Dell Broadcom 5719 NICs and ran into something weird when bonding all four ports in round-robin mode, so I want to share.

    Setup: two NICs connected directly port-to-port with patch cables, tested on three different server hardware setups with these kernels:
    Debian 9: 4.9.0-7-amd64
    Proxmox: 4.15.18-8-pve

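    For context, a round-robin bond on Debian 9's ifupdown looks roughly like this. This is a sketch, not my exact config; the interface names (eno1..eno4) and address are placeholders:

    ```
    # /etc/network/interfaces — balance-rr bond sketch (requires the ifenslave package)
    auto bond0
    iface bond0 inet static
        address 10.10.10.2/24
        bond-slaves eno1 eno2 eno3 eno4
        bond-mode balance-rr
        bond-miimon 100
    ```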
    When testing with iperf, it ran at full speed in only one direction:

    [ 5] local 10.10.10.2 port 50942 connected with 10.10.10.3 port 5001
    [ ID] Interval Transfer Bandwidth
    [ 5] 0.0- 1.0 sec 445 MBytes 3.73 Gbits/sec
    [ 5] 0.0-10.0 sec 4.38 GBytes 3.76 Gbits/sec
    [ 4] local 10.10.10.2 port 5001 connected with 10.10.10.3 port 39798
    [ 4] 0.0- 1.0 sec 253 MBytes 2.12 Gbits/sec
    [ 4] 0.0-10.0 sec 2.51 GBytes 2.15 Gbits/sec
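    That output format is iperf2's; a run along these lines would produce it (the addresses match the test above, the exact flags are a guess at the original invocation):

    ```shell
    # on 10.10.10.3 (server side):
    iperf -s

    # on 10.10.10.2 (client side), 10-second run;
    # -r repeats the test in the reverse direction to expose the asymmetry:
    iperf -c 10.10.10.3 -t 10 -r
    ```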

    Then I tried two machines with Debian 9 installed (kernel 4.9.0-7-amd64).
    Those two worked well in both directions, around 445 MBytes/s.

    Then I installed Proxmox on one of those Debian machines to see if it was a Proxmox problem.
    After the first restart, booting the Proxmox kernel, it was again slow in one direction.
    So it runs at 445 MBytes/s in both directions on the Debian kernel but slows down with Proxmox's.
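    To rule out a bond-option mismatch between the two kernels, one thing worth comparing on both is the bonding driver's own view of the bond and the per-port link state. A diagnostic sketch (bond0/eno1 are placeholder names):

    ```shell
    # mode and slave state as the kernel sees them —
    # should report "Bonding Mode: load balancing (round-robin)" on both kernels
    cat /proc/net/bonding/bond0

    # per-port negotiated speed/duplex, repeated for each slave
    ethtool eno1 | grep -E 'Speed|Duplex'
    ```

    If both kernels show identical mode, miimon, and link speeds, the difference is more likely in the driver (tg3 for the 5719) or queue/offload defaults than in the bond config itself.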

    4-port NIC bonding is kind of crazy, but I'm wondering: has anybody experienced something like this and managed to solve it?
     