[SOLVED] Linux Bond mixed Speed (10G/1G active-backup) not using 10G


New Member
Dec 18, 2019
Ulm, Germany
I'm having a little trouble configuring my network bonds. I'm using a Linux bond, bond0, which uses eno1 and eno3 for communication.

In my setup, eno1 is a 10GBit port connected to a 10GBit switch, and eno3 is a 1GBit port connected to a 1GBit switch. Both switches are connected through a third switch (that's where the internet comes from); together they provide redundant access and make sure the cluster can always talk to itself to keep corosync happy.

Since I want bond0 to use eno1 by default (I want to use my 10GBit link, and we do get 10G internet access), I defined eno1 as bond-primary in bond0.

The problem, though: the bond only offers 1G. I checked, and ethtool reports eno1 as 10G, eno3 as 1G, and bond0 as 1G (also tested with iperf and scp, capped at ~112MB/s).
How can I make bond0 utilize my 10G network?

-- cat /etc/network/interfaces --
auto lo
iface lo inet loopback

iface eno1 inet manual
#10G network

iface eno2 inet manual
#10G Ceph

iface eno3 inet manual
#1G network

iface eno4 inet manual
#1G Ceph

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno3
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno1


Settings for eno1:
Supported ports: [ TP ]
Supported link modes: 100baseT/Half 100baseT/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 100baseT/Half 100baseT/Full
Advertised pause frame use: Symmetric Receive-only
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Link partner advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Link partner advertised pause frame use: No
Link partner advertised auto-negotiation: Yes
Link partner advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Twisted Pair
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: g
Wake-on: d
Current message level: 0x00000000 (0)

Link detected: yes

Settings for bond0:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Port: Other
Transceiver: internal
Auto-negotiation: off
Link detected: yes

A string (eth0, eth2, etc) specifying which slave is the
primary device. The specified device will always be the
active slave while it is available. Only when the primary is
off-line will alternate devices be used. This is useful when
one slave is preferred over another, e.g., when one slave has
higher throughput than another.

The primary option is only valid for active-backup(1),
balance-tlb (5) and balance-alb (6) mode.

In Debian the option is then called bond_primary.
What is cat /proc/net/bonding/bond0 showing?
root@eris:~# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eno2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: b0:83:fe:cb:dc:16
Slave queue ID: 0

Slave Interface: eno4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: b0:83:fe:cb:dc:1a
Slave queue ID: 0
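For quick checks, the line you care about can be pulled out of that status with standard tools. A minimal sketch, using an inlined copy of the output above so it is self-contained; on the live box you would read /proc/net/bonding/bond1 directly (same format):

```shell
# Sample of the bonding status shown above; on a real system use:
#   status=$(cat /proc/net/bonding/bond1)
status='Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno4'

# Split each line on ": " and print the value of the active-slave line.
active=$(printf '%s\n' "$status" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "$active"
```

If this prints the 1G interface even though the 10G slave is up, the primary setting never reached the bonding driver.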

It's showing the 1GBit interface as the active interface, and also Primary Slave: None; this seems to be the issue.
(Please note that bond1 and bond0 run the "same" config, just using different interfaces.)

From cat /etc/network/interfaces

auto bond1
iface bond1 inet static
        netmask  28
        bond-slaves eno2 eno4
        bond-miimon 100
        bond-mode active-backup
        bond_primary eno2
Not sure it'll make a difference, but the official Debian documentation says bond-primary, not bond_primary.


(note that if you have installed ifupdown2, bond-primary is not yet supported, support will be available really soon)

So my configuration now includes BOTH variants, but I think the problem here is that we're using ifupdown2 - which makes me sad now :(

I've found a workaround that I'll try to implement:

auto bond0
iface bond0 inet dhcp
        bond-slaves eth0 wlan0
        bond-mode active-backup
        up echo eth0 > /sys/class/net/$IFACE/bonding/primary
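Adapted to the setup in this thread (bond0 with eno1 as the desired primary, interface names taken from the config above), the same sysfs hook would look roughly like this - a sketch, assuming ifupdown2 runs the up hook and exports $IFACE like classic ifupdown does:

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno3
        bond-miimon 100
        bond-mode active-backup
        # ifupdown2 ignores bond-primary (see above), so set the
        # primary slave directly via sysfs once the bond is up
        up echo eno1 > /sys/class/net/$IFACE/bonding/primary
```

Afterwards, cat /proc/net/bonding/bond0 should report Primary Slave: eno1, and eno1 should become the currently active slave.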

