Bond connection as main interface

ruffpl

I installed PVE 7 with my motherboard's integrated 1GbE NIC as the main interface. The server is connected to a D-Link switch that also has bonding options. I added a PCIe quad 1GbE card, connected it to the switch with 4 cables and bonded those ports in the D-Link configuration. In Proxmox I created bond0 and added the 4 ports of the quad GbE card to it, so part of my plan was done.
I want to use the quad card with the bond connection (instead of the integrated NIC) as the main Proxmox interface and as the main interface for all VMs, instead of the current vmbr0.
I followed this post, but after that I lost the connection and had to edit /etc/network/interfaces with nano to get my server working again. Now I am back at the beginning and don't know how to do it. Can I get some help?
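For reference, this is roughly the minimal /etc/network/interfaces I fell back to in order to get the server reachable again over the integrated NIC (the NIC name and addresses are from my setup; the gateway 192.168.1.1 is an assumption, adjust to yours):

auto lo
iface lo inet loopback

iface enp0s31f6 inet manual

# management bridge on the single onboard NIC
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.100/24
        gateway 192.168.1.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0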
 

Attachments

  • 2021-11-14 22.57.35 192.168.1.100 122bb790d042.png
Create your Linux bond, set your 4 NICs as slaves for that bond, choose the correct bonding mode (LACP, round-robin, active-passive or whatever, but make sure your switch is using the same configuration), bridge that bond to vmbr0 and hit the "apply" button.

In case you don't want to lock yourself out, you could temporarily give enp0s31f6 a static IP like 192.168.1.254, so your PVE webUI will also be available on that IP if the bond fails.
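Sketched out in /etc/network/interfaces it would look something like this (assuming the four quad-card ports are enp1s0f0 to enp1s0f3 and LACP/802.3ad on both ends; your interface names and addresses will differ):

auto lo
iface lo inet loopback

# temporary fallback IP on the onboard NIC, in case the bond fails
auto enp0s31f6
iface enp0s31f6 inet static
        address 192.168.1.254/24

# the bond itself carries no IP
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

# the host IP lives on the bridge, and the bridge port is the bond
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.100/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Then ifreload -a (or the "apply configuration" button) activates it.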
 
Thanks. I created something like in the attached picture, but it is still causing problems.

When the main cable plus the 4 extra cables are connected between the server and the switch, I cannot connect to PVE. I have to take the cables out of the switch and reset my router to get the connection to Proxmox back (over a single cable). My switch is a D-Link DGS-1210-48; in its link aggregation settings there are 2 options, LACP and Static. I don't know what is causing the problem.
 

Attachments

  • 111.png
Your screenshot shows that you gave your bond0 an IP, but it should be without one (and especially not an IP that is already in use somewhere else).
And your vmbr0 is still bridged to that single NIC and not to the bond.
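In other words, the relevant part of /etc/network/interfaces should look like this (same assumed NIC names as in my sketch above):

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.100/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

As for the two switch options: "Static" should correspond to a non-LACP bond mode like balance-xor or round-robin, while the LACP option matches bond-mode 802.3ad; both sides have to agree.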
 
I think I got it working. Now it looks like the attached picture (with an extra IP address on the integrated NIC).

Another problem is that my motherboard's PCIe x8 slot 1 and PCIe x8 slot 2 are the same color, both meant for SLI/CrossFire (and both are in use), so even though they are in different IOMMU groups, a couple of minutes after I start a VM I lose the connection to PVE. Do you think a bond connection like this would be possible if, instead of one quad card, I buy 2 dual GbE cards in PCIe x1 size? (I have 4 of them and I think they will be in a different group.) The cheapest I found is the Dell 0FCGN and I am thinking of giving it a try. Will something like that work, 2 dual Ethernet cards with 2 bonds? If not, is there some way to split these 2 PCIe x8 slots so they are separated from each other?
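This is how I checked the IOMMU groups, by the way (a standard sysfs loop, nothing Proxmox-specific):

#!/bin/bash
# print every IOMMU group with the PCI devices that belong to it
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done

My understanding is that anything sharing a group with a device passed through to a VM gets detached from the host together with it, which would match the symptom of losing the connection.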
 

Attachments

  • 2021-11-16 19.12.44 192.168.1.100 c8e95ab418a0.png
