Two 1 Gb NICs for load balancing and doubling speed

xvegax

Hi guys,

I am building up my server with two NICs (Intel Gigabit adapters) this week and will use the newest Proxmox version for the installation. Eventually, the server will be placed in a data center behind a Cisco or Huawei switch with two 1 Gb ports, to which I will have configuration access.

Now, I can choose between

1. getting two public IP addresses and using one per NIC

or

2. just using one public IP and creating a bond with LACP to get double the throughput and some load balancing.

Which makes more sense in terms of efficiency? And is option 2 a viable, working scenario at all?

The reason I ask at all is that I could not find much information about this scenario, and in a very old thread from around 2012 in a different forum I read that LACP did not really work out on a Proxmox server.

Does anyone have experience with this setup or some knowledge about it? What are your recommendations in general?

Thank you in advance.
 
Doing an LACP bond configuration gives you the same potential throughput as two independent links. However, a bond also provides failure tolerance, so if one port or cable dies, all services stay up.

You can also still use multiple public IP addresses; they will simply be configured on the host.
PLEASE tell me you are not simply putting your Proxmox management interface on an open port to the internet, though... that would be a bad idea.

As far as your concerns about reliability go, both Linux bonding and OVS bonding are fantastically stable. I have been running both in production for a couple of years now without a single issue.
 

So an LACP bond will not give load balancing and double the speed, just fail-safety. That is nice, but still a pity. Is there another way to use those two NICs together to get double the speed?

Nope, I have three NICs: two for the public IP (LACP bond) and one for the private IP (Proxmox management).
 
> So an LACP bond will not give load balancing and double the speed, just fail-safety.

Incorrect. An "active-backup" bond (or two independent links) will operate this way.

An LACP bond will give you more throughput, equal to the combined capacity of all bonded links (hence the name: Link Aggregation Control Protocol). The bond then connects to a bridge, vswitch, or internal interface depending on your configuration. LACP bonds also fail safely if one of the links dies, just at a reduced total capacity.
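For reference, a minimal Linux bond stanza in Debian ifupdown style (a sketch only: the NIC names eth0/eth1 are placeholders, and bond-xmit-hash-policy is an optional knob that only controls how flows are hashed across the member links):

Code:
# Sketch - replace eth0/eth1 with your real NIC names
auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-xmit-hash-policy layer3+4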
 
Alright, I have two more questions, but I did not want to open another thread just for that.

As mentioned earlier, I am using three NICs on the machine: one for the internal connection (the Proxmox management interface) and two for the LACP bond (internet access).

1. How can I restrict the Proxmox interface so that it cannot be accessed via the two external interfaces?

2. When using a /29 network I have eight IPs. Subtracting the network ID, broadcast, gateway and the first usable host IP bound to those two external interfaces, I will have only four IPs left for the VMs, correct? Is there a way to run more VMs with only those four IPs? Do I have to NAT then, or is there an easier approach?
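A quick sketch of that /29 arithmetic, using a hypothetical x.x.x.16/29 range (the real addresses depend on what the provider assigns):

Code:
# x.x.x.16/29 = 8 addresses (hypothetical example range)
# x.x.x.16        network ID    - unusable
# x.x.x.17        gateway       - used by the provider
# x.x.x.18        Proxmox host  - bound to the bonded uplink
# x.x.x.19 - .22  four addresses left for the VMs
# x.x.x.23        broadcast     - unusable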

Thank you for your help, guys.
 
Are you using Linux bonding or Open vSwitch bonding?

The Proxmox interface is only reachable on interfaces for which the host has a valid IP address.
If you are using Linux bridges/bonds, then you will need to make a separate bridge with the LAN IP address for the Proxmox host.

If you are using Open vSwitch, then you will want to edit your internal interface to specify which bridge the Proxmox host is connected to.

In your case, it sounds like the host has its own network interface, which means it likely does not have a valid IP address on the bonded NICs, so you may already be configured the way you expect.

If you need more flexibility over who can and cannot connect, you could also configure ACLs on your firewall (if you have access to it) or the Proxmox firewall (be careful with this!) to only permit specific IPs/networks to connect to the web interface, SSH, FTP, etc.
 
Well, I thought about using Linux bonding with this configuration:

Code:
root@server:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.20.16.22
        netmask 255.255.255.248
        gateway 172.20.16.17
        bridge_ports enp3s0
        bridge_stp off
        bridge_fd 0

auto enp4s0
iface enp4s0 inet manual
        bond-master bond0

auto eno1
iface eno1 inet manual
        bond-master bond0

auto bond0
iface bond0 inet static
        address 200.200.200.114
        gateway 200.200.200.113
        netmask 255.255.255.248
        bond-mode 4
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves enp4s0 eno1

I hope this configuration is free of errors.
 
In this case, Proxmox's web UI would be available on any of these interfaces, since you are using a Linux bridge that has IP addressing and a bond interface that also has IP addressing.

The way to prevent access to the web UI would be to enable the firewall on the Proxmox host and block unwanted management traffic on the 200.200.200.x network (i.e. only allowing it on the 172 network). Keep in mind that you will need explicit access to that network in the future if you co-locate the machine. A VPN could work, but also know that VPNs can (and do!) fail and sometimes need to be restarted. You don't want to lock yourself out if this machine is in a remote datacenter.

If possible, you could also remove the IP addressing on bond0. You do not need an IP address on the bond interface. You could instead attach it to another bridge that is also IP-less, since bridges act like a switch and do not need addressing to do layer 2 networking. If there is no IP address, then there is no way for anything above layer 2 to connect to the host machine. Simply assign the management IP to the other bridge and make the attached physical management NIC the only member of the addressed bridge. This gives you a physical cable to use for management on a completely separate network from the Proxmox guests' VM network. You can even VLAN it if you're inclined to do so.
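A minimal sketch of that layout in /etc/network/interfaces, reusing the interface names and management addressing from the config above (the second bridge name, vmbr1, is just a placeholder):

Code:
# Addressed bridge for Proxmox management; enp3s0 is its only member
auto vmbr0
iface vmbr0 inet static
        address 172.20.16.22
        netmask 255.255.255.248
        gateway 172.20.16.17
        bridge_ports enp3s0
        bridge_stp off
        bridge_fd 0

# LACP bond with no IP address - a pure layer 2 uplink
auto bond0
iface bond0 inet manual
        bond-slaves enp4s0 eno1
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1

# IP-less bridge for guest traffic, trunked over the bond
auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0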

If this is the case, I would highly recommend looking into Open vSwitch on the Proxmox wiki. It can provide you some more configuration guidance without needing multiple bridges, and it would also let you manage Proxmox access through a simple firewall rule on an Open vSwitch internal port.

You can also limit this at a network firewall that is not your Proxmox host, if you have access to one. This would be done in the form of an ACL or firewall rule on a device between your Proxmox host and the outside world. This may not be available to you in a colocation space, however.
 
> A VPN could work, but also know that VPNs can (and do!) fail and sometimes need to be restarted. You don't want to lock yourself out if this machine is in a remote datacenter.

Good point.

You could also restrict access to the Proxmox web interface in your firewall to a few fixed IPs that you use for management. Or a port-knocking rule can also do the job if you use a dynamic IP address.

Good luck!
 
> The way to prevent access to the web UI would be to enable the firewall on the Proxmox host and block unwanted management traffic on the 200.200.200.x network [...]
>
> If possible, you could also remove the IP addressing on bond0. You do not need an IP address on the bond interface. You could instead attach it to another bridge that is also IP-less [...]

Alright, I will use Debian's or Proxmox's firewall then and simply forbid access to the web UI on the host. Does this affect the guests/VMs, too?

Are you saying I can spare one of my precious six IP addresses? :D No, seriously, how do the guests/VMs communicate with the outside world if the two outgoing interfaces (the bond) do not have an IP? I thought every interface (or bond) needs an IP to reach the switch/gateway.

By "bridge", do you mean an actual old-school hardware bridge? I doubt I am allowed to bring one into the data center. I will just get access to the three ports on the switch: one for internal and two for external communication.
 
You will want to use Proxmox's firewall, which will handle all the rulesets for you once you configure it in the interface.
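For example, restricting the web UI (port 8006) and SSH to the management network could look roughly like this in the host firewall file. Treat it as a sketch: <nodename> is a placeholder, 172.20.16.16/29 is the management network from the config earlier in the thread, and the exact rule syntax should be checked against the Proxmox firewall documentation before enabling it, so you do not lock yourself out:

Code:
# /etc/pve/nodes/<nodename>/host.fw  (sketch only)
[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -source 172.20.16.16/29 -p tcp -dport 8006  # web UI from the management net
IN ACCEPT -source 172.20.16.16/29 -p tcp -dport 22    # SSH from the management net
IN DROP -p tcp -dport 8006
IN DROP -p tcp -dport 22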

> No, seriously, how do the guests/VMs communicate with the outside world if the two outgoing interfaces (the bond) do not have an IP? I thought every interface (or bond) needs an IP to reach the switch/gateway.

Your guests will have IP addresses, and then you will either have to assign them one of your public IP addresses, or NAT them behind one or more IP addresses. You will likely need some kind of router or virtual router (like pf or pfSense) to manage these routes if you plan on having more guests than public IPs.
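As one illustration of the NAT approach (a sketch, not taken from this thread's configs): a private, host-only bridge plus iptables masquerading out of the public-facing bridge. The bridge name vmbr2, the 10.10.10.0/24 subnet, and the outgoing bridge vmbr0 are placeholders:

Code:
# Hypothetical NAT bridge for guests that do not get a public IP
auto vmbr2
iface vmbr2 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE

Guests on vmbr2 would then use 10.10.10.1 as their gateway.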

Your interfaces do not need to have IP addresses, because the guests have IP addresses. The outgoing physical links are simply trunk lines from your Linux bridge (a.k.a. a virtual switch) to the physical switch wherever your server is located. Even your Linux bridge does not require an IP address, because it can operate as a layer 2 switch, which forwards on MAC addresses rather than layer 3 IP addresses.

Think of it this way: your interfaces are not trying to reach the gateway. The machines that communicate over those links are. The links have no need for an IP address because you aren't trying to communicate with them; you're trying to communicate with whatever is on either side of them.

> By "bridge", do you mean an actual old-school hardware bridge?

No, I mean the Linux bridge. Yours is currently called "vmbr0". You can have more of these to segment network traffic, the same way you could have more switches stacked in a rack.

This is equivalent to the vswitch part of Open vSwitch (which I still think you should check out). Think of it as a literal switch inside your Proxmox realm that all your virtual machines are "plugged into", and which also connects to the ports on the physical server, or to groups of ports, since an LACP bond "turns multiple ports into a single port".
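To make the "plugged into" picture concrete: a VM's virtual NIC is attached to a bridge (and optionally tagged with a VLAN) when you configure the VM, either in the GUI or with the qm CLI. The VM ID and the bridge/VLAN values below are only example values:

Code:
# Attach VM 100's first virtual NIC to bridge vmbr1, tagged with VLAN 167
qm set 100 -net0 virtio,bridge=vmbr1,tag=167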
 
Alright, the server has now been placed in the data center behind a Huawei switch. The switch's configuration is this:

Code:
interface Eth-Trunk10
description #### Server ####
port link-type trunk
port trunk allow-pass vlan 167
mode lacp
load-balance src-dst-mac


Bond0 status:

Code:
root@rakete:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: down
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0


This is the network configuration (one NIC for management (vmbr0), two NICs for internet access (bond0)):

Code:
root@rakete:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address x.x.x.b
        netmask 255.255.255.248
        gateway x.x.x.a
        bridge_ports enp3s0
        bridge_stp off
        bridge_fd 0

auto enp4s0
iface enp4s0 inet manual
bond-master bond0

auto eno1
iface eno1 inet manual
bond-master bond0

auto bond0.167
iface bond0.167 inet static
address a.a.a.b
netmask 255.255.255.255
bond-mode 4
bond-miimon 100
bond-lacp-rate 1
bond-slaves enp4s0 eno1
vlan-raw-device bond0


By the way, if I remove the IP address from bond1, it gives me an error saying that it needs to have an IP address to come up. Just saying, because it was mentioned that there is no need for an IP address to be bound to the interface (bond0).

Status:

Code:
root@rakete:~# service networking status
● networking.service - Raise network interfaces
   Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2019-09-23 11:40:49 CEST; 4min 15s ago
     Docs: man:interfaces(5)
  Process: 9936 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS)
Main PID: 9936 (code=exited, status=0/SUCCESS)

Sep 23 11:40:49 rakete systemd[1]: Starting Raise network interfaces...
Sep 23 11:40:49 rakete ifup[9936]: /etc/network/if-pre-up.d/ifenslave: 19: echo: echo: I/O error
Sep 23 11:40:49 rakete ifup[9936]: /etc/network/if-pre-up.d/ifenslave: 47: /etc/network/if-pre-up.d/ifenslave: cannot create /sys/class/net/bond0.167/bonding/miimon: Directory nonexistent
Sep 23 11:40:49 rakete ifup[9936]: /etc/network/if-pre-up.d/ifenslave: 47: /etc/network/if-pre-up.d/ifenslave: cannot create /sys/class/net/bond0.167/bonding/mode: Directory nonexistent
Sep 23 11:40:49 rakete ifup[9936]: /etc/network/if-pre-up.d/ifenslave: 47: /etc/network/if-pre-up.d/ifenslave: cannot create /sys/class/net/bond0.167/bonding/lacp_rate: Directory nonexistent
Sep 23 11:40:49 rakete ifup[9936]: Failed to enslave enp4s0 to bond0.167. Is bond0.167 ready and a bonding interface ?
Sep 23 11:40:49 rakete ifup[9936]: Failed to enslave eno1 to bond0.167. Is bond0.167 ready and a bonding interface ?
Sep 23 11:40:49 rakete systemd[1]: Started Raise network interfaces.


Trying to bring it up:

Code:
root@rakete:~# ifdown enp4s0 eno1
root@rakete:~# ifup bond0.167
ifup: interface bond0.167 already configured


Can someone tell me what I have done wrong here? It is a vanilla system with a fresh installation and not much else done.
 
In your vmbr0 section in /etc/network/interfaces:

Code:
        bridge_ports enp3s0

should be

Code:
        bridge_ports bond0

instead.

Remove the IP address in the bond0 configuration section, as it will be a layer 2 connection to the switch and to vmbr0.


I forget which mode number is LACP offhand, but you can set this in the web interface. If your switch is set to LACP, your bond must also be LACP or no communication will be possible.
 

Thanks, but there is a misunderstanding. There are three NICs. One NIC (enp3s0) is for management, and the other two NICs (eno1, enp4s0) form the bond and shall be the uplink for the VMs, etc.

So, vmbr0 is management with enp3s0, and bond1 is the uplink with eno1 and enp4s0.

LACP is mode 4 under bonding.
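For reference, the numeric bond modes map to these names in the Linux bonding driver:

Code:
# bond-mode 0 / balance-rr       round-robin
# bond-mode 1 / active-backup    failover only, no aggregation
# bond-mode 4 / 802.3ad          LACP (what the switch's "mode lacp" expects)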

Also: "By the way, if I remove the IP address from bond1, it gives me an error saying that it needs to have an IP address to come up."
 
You're right, I did misread.

However, you still need to have your bond attached to a bridge, and your bond should not have an IP address. Your bond is a trunk link, not an access-layer link. Your bridge has the option to carry an IP address as well, and you can give it an address there.

The error you keep getting is because the bond either needs to be IP addressed, or attached to a bridge.

Check out this wiki page:
https://pve.proxmox.com/wiki/Network_Configuration#_linux_bond
 
Alright, I changed it up like this:

Code:
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto enp4s0
iface enp4s0 inet manual
    bond-master bond1

auto eno1
iface eno1 inet manual
    bond-master bond1

auto bond1
iface bond1 inet manual
    bond-slaves eno1 enp4s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-lacp-rate 1

auto bond1.167
iface bond1.167 inet static
    address x.x.x.x
    netmask 255.255.255.248
    vlan-raw-device bond1

auto vmbr0
iface vmbr0 inet static
    address  a.a.a.b
    netmask  255.255.255.248
    gateway a.a.a.a
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0

But I am still using vmbr0 for the management interface. So, at a later point I will have to change it like this:

Code:
auto lo
iface lo inet loopback

iface enp3s0 inet static
    address  a.a.a.b
    netmask  255.255.255.248

auto enp4s0
iface enp4s0 inet manual
    bond-master bond1

auto eno1
iface eno1 inet manual
    bond-master bond1

auto bond1
iface bond1 inet manual
    bond-slaves eno1 enp4s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-lacp-rate 1

auto bond1.167
iface bond1.167 inet manual
    vlan-raw-device bond1

auto vmbr0
iface vmbr0 inet static
    address x.x.x.x
    netmask 255.255.255.248
    gateway x.x.x.y
    bridge-ports bond1.167
    bridge-stp off
    bridge-fd 0

Because I need the bond with VLAN 167 to be able to transfer data and be the bridge to the VMs, while I still have the interface enp3s0 for management, correct? Or must the vmbr0 interface be called something like vmbr0.167?

Will I have to use two gateways and two routing tables then?
 
You can name your vmbr interfaces anything you want, but in order for them to show up in the Proxmox web interface they must be named something like vmbr0, vmbr1, vmbr2, etc.

You can also make multiple vmbr interfaces, e.g. one for management and another for the VMs that trunks over the bonded interface.

The .XXX suffix after your interface names is not strictly required; it is mainly a naming convention (borrowed from the Cisco world) that identifies the device in ifconfig. You enable VLANs in the VLAN configuration on the interface or on the virtual bridge interface.
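As an illustration (a sketch, not taken from your current config), two ways to put guest traffic on VLAN 167 using the names already in this thread; variant b) uses a VLAN-aware bridge so you can tag VLAN 167 on each VM's virtual NIC in the Proxmox GUI instead:

Code:
# a) Explicit VLAN sub-interface on the bond, bridged to an IP-less vmbr1
auto bond1.167
iface bond1.167 inet manual
        vlan-raw-device bond1

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1.167
        bridge-stp off
        bridge-fd 0

# b) Alternative: VLAN-aware bridge on the raw bond; tag VLAN 167 per VM in Proxmox
auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094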
 
Unfortunately, I have now this problem: https://forum.proxmox.com/threads/proxmox-6-and-ryzen-3700x-x570.58359/

I found out that the connectivity problems I had, as well as my issue with the bonding, seem to be the result of using a bridge (vmbr0).

When I removed it and put the IP configuration directly on enp3s0, the problems stopped... and now I do not know how to use the VMs/guests over a bonded bridge if a bridge is not possible, or what is even causing the trouble on those bridges.
 
The problem is not with your hardware. The problem is that you are misunderstanding the networking and not following the instructions in the documentation.

Let's see if this helps you see where you are going wrong:

I just made a brand new Proxmox server and configured it with a bond interface of two links and a third, separate management interface, just like you are trying to do. The names are slightly different, and so are the IP addresses, but you can get an idea of how this all comes together.

I have also added large comment blocks to describe what is going on here.

Code:
# Loopback virtual interface
auto lo
iface lo inet loopback

#Ports on back of server
iface enp6s0f0 inet manual
iface enp6s0f1 inet manual

#Proxmox Ethernet management interface.
#This is simply a single interface {eno1},
#which is ethernet jack 1 on the back of the server.

auto eno1
iface eno1 inet static
        address  172.23.45.67
        netmask  24



#Bond interface for trunking to the switch.
#Notice that the bond has no IP address, as it is not an access interface.
#It is just a trunk for all traffic between the switch and the virtual switch {vmbr0}.
#Even though there are multiple physical connectors {enp6s0f0 enp6s0f1},
#bond0 is recognized as a SINGLE physical port.

auto bond0
iface bond0 inet manual
        bond-slaves enp6s0f0 enp6s0f1
        bond-miimon 100
        bond-mode 802.3ad




#Bridge interface for VMs, with the bond attached for trunking.
#The bridge has one "physical" interface attached, {bond0}.

auto vmbr0
iface vmbr0 inet static
        address  10.0.23.12
        netmask  16
        gateway  10.0.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Does this make more sense to you?
 
First, I will ignore the arrogant and untrue shit in your first statement, alright?

Second, I still thank you for your effort, but you are just plain wrong about the issue here: I have both interfaces (one NIC for management and two NICs for the bond) working and both are fine, AS LONG AS I DO NOT use any bridge like vmbr0 or vmbr1.

In other words, I did get the bonding working, also using the documentation, but ...

... as soon as I configure a bridge, both interfaces have problems and lose their connections, and yes, I mean both, even the one which is not configured in any bridge.

Just to be clear again ...

Code:
auto enp3s0
iface enp3s0 inet static
    address  a.a.a.b
    netmask  255.255.255.248
    gateway a.a.a.a

... alone works fine, but ...

Code:
auto enp3s0
iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address a.a.a.b
    netmask 255.255.255.248
    gateway a.a.a.a
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0

... loses the connection several times per hour and then blocks the connection for five to 60 minutes.

So, I left the management interface without a bridge (removed vmbr0), the way it was working fine (see above), and I had only one connection loss in 24 hours.

Yesterday I then put the bond behind vmbr1 exactly the way the documentation says, and boom... the problem is back again, even affecting interface enp3s0, which is not in a bridge, as just mentioned.

So, as soon as a bridge is involved, something is going wrong all over the network (settings).
 
