Making the most of all 4x NICs on Proxmox

darren2517
Member · Sep 13, 2021
Hello everyone,

Many years ago I installed FreeNAS onto an old computer I had & didn't really go near it for years. Then during lockdown I decided to upgrade it & came across TrueNAS Core. This got me interested again & I started to watch some videos on YouTube. Then I came across a video by 'networkchuck' on virtualisation & hypervisors & thought 'this is interesting'. I watched some more videos on Proxmox & have since become a bit obsessed :). Six months ago I didn't even know what a 'homelab' was but, after watching some more videos, I saw that people were using old rack servers & installing hypervisors on them. I needed a new computer anyway so I searched on eBay & eventually got myself a Dell R710 for less than £400.

I'm new to Proxmox, networking & Linux, so please keep it simple as I'm still struggling with all the terminology. However, I am willing to learn & have read a lot & watched many videos. I don't want to waste anyone's time but would appreciate some high-level advice on my setup. My server came with a 4x 1Gb port PCI NIC & I would like to make the most of all 4 ports.

After reading the Proxmox wiki page on network configuration (many times) I think I have an idea of what would be a good choice for me. However, I'm really not sure if this will even work. It doesn't matter if I break everything because I can just start over. I have a PVE node set up & have installed some VMs & LXC containers.

As my server has 4x NICs I was thinking of creating two Linux bonds: bond0 (with bond-slaves eno1 & eno2) & bond1 (using eno3 & eno4).

bond0
For bond0 I was going to use 'active-backup' mode, create a VLAN 5 for PVE management, & also make it VLAN aware with a traditional Linux bridge. I would also like to use this in the future for a cluster setup.

bond1
For bond1 I was thinking of using the other two ports, eno3 & eno4, as a bridge port & trying the 802.3ad mode. Apparently this is the best mode to use if you can configure the switch. I have an old HP managed switch that I'm currently reading up on (aggregation & LACP), & again I just want to test it out.

My intention is to add a VLAN tag when creating VMs. I have no idea if it's even possible to have a mix of Linux bond modes on the same NIC, & I'm open to any suggestions on how to make the best use of all 4 ports.
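(From what I've read, the tag can also be set per VM from the CLI; something like this, where the VMID 100 & tag 5 are just placeholders:)
Code:
qm set 100 --net0 virtio,bridge=vmbr0,tag=5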

As I mentioned earlier, I've put this together after reading the wiki page & some posts on the forum. Here's the /etc/network/interfaces configuration I was thinking of.

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup
    bond-primary eno1

auto vmbr0
iface vmbr0 inet static
    address XX.XX.1.1/24
    gateway XX.XX.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
    address XX.XX.5.1/24
    gateway XX.XX.1.1
    bridge-ports bond0.5
    bridge-stp off
    bridge-fd 0

auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet static
    address XX.XX.10.1/24
    gateway XX.XX.X.1
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

Here is a diagram of my topology..
Thanks,
Darren

MyTopology.png
 
You cannot have more than one gateway address in the PVE network config. Are the router and pfSense the same physical device in your diagram?
 
Hi, thanks for your reply.. I have removed the second gateway from vmbr1 & just marked the port on the router that pfSense will be plugged into.
 
You can have an IP/CIDR on vmbr1 if that is a different subnet and you want Proxmox to respond from there; just no second gateway is required or allowed.

Could you not plug your modem directly into pfSense and skip the router? You will likely have a double NAT situation with modem -> router -> pfSense -> switch.

Defining a hwaddress for each vmbr* (and even iface) is not an absolute requirement; Proxmox will do that automatically under typical conditions. Some people need it as a problem solver, though.

Yes, you can make two different types of Linux bonds, active-backup and LACP (the latter only if your switch supports it), as you have laid out here with your 4 NICs. I usually use bond-xmit-hash-policy layer3+4 for my LACP/802.3ad bonds, but either will work and you can investigate which you prefer.
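For reference, that would just swap one line in your draft bond1 stanza:
Code:
auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4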
 
In some situations it also might make sense to use dedicated NICs for a purpose. For example, corosync or pfsync should get its own NIC for cluster communication. And it's often useful to have a dedicated NIC for the storage backend. Let's say you want to run some big backups to your TrueNAS VM while gaming online. For online gaming you want low latencies, but if the backup is flooding the network to its limit, your ping will be bad. If your NAS were on its own subnet with its own NIC, it wouldn't interfere as much with the stuff you do online.

I think I would use 2 NICs as an LACP bond for the storage backend and 2 NICs as an LACP bond for the frontend (everything else). If you later decide to add another PVE server and want to build a cluster, you can destroy the frontend bond and just use one NIC for the frontend and one NIC for cluster communication.
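As a rough sketch of that later split (one NIC frontend, one NIC cluster, LACP bond for storage) in /etc/network/interfaces terms; all addresses and subnets here are placeholders, not recommendations:
Code:
# eno1: frontend bridge for management and VMs
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# eno2: dedicated corosync link, no bridge needed
auto eno2
iface eno2 inet static
    address 10.10.20.2/24

# eno3 + eno4: LACP bond and bridge for the storage backend
auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr1
iface vmbr1 inet static
    address 10.10.30.2/24
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0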
 
I've put the IP address back into vmbr1 again & put it on the .10/24 subnet. Does it look OK now?

I'm hoping to use pfSense to replace my router but I haven't got it installed yet (I'm still at the planning stage).

I have removed the hwaddress lines.. I had read somewhere that having these would prevent some problems; hopefully I won't need them.

That's great to hear that I can use different bond types on the same NIC. Would I need a layer 3 switch to utilize bond-xmit-hash-policy layer3+4? I have been trying to set this up on my HP switch but it seems a bit over-complicated in comparison to something like the UniFi switches. I will try & investigate further.. thank you.
 
Hi Dunuin, yes this is exactly the setup that I would like..

2 NICs as LACP bond for my TrueNAS backend
1 NIC for the frontend
1 NIC for the cluster

So, can this be done by just assigning a different subnet for the frontend & for the cluster, or do I need to create VLANs for them? I'm afraid of creating a loop when I connect the cables into my switch.

Is there information on how to do this on the Proxmox wiki pages? Can you point me in the right direction?

Thank you
 
Example below for what I mean by allowing the address and removing the gateway on vmbr1:
Code:
auto vmbr1
iface vmbr1 inet static
    address XX.XX.10.1/24
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
So only leave the gateway line on the Linux bridge connected to pfSense. I'll digress and recommend OPNsense over pfSense here; in my experience there are just better people behind OPNsense.
 
Hi vesalius, I see what you mean now about removing the full 'gateway' line. I will look into OPNsense now; as I said, I'm just planning my setup & am open to all advice & ideas.

Thank you :)
 
So, can this be done by just assigning a different subnet for the frontend & for the cluster, or do I need to create VLANs for them? I'm afraid of creating a loop when I connect the cables into my switch.

Is there information on how to do this on the Proxmox wiki pages? Can you point me in the right direction?
You can work with subnet masks, but it would be better to use one physical switch per subnet, or at least a switch that can handle VLANs (or better, tagged VLANs).
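If your switch does tagged VLANs, the host side could look something like this; the VLAN IDs and subnets are just placeholders:
Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# frontend on VLAN 10
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1

# cluster communication on VLAN 20
auto vmbr0.20
iface vmbr0.20 inet static
    address 192.168.20.2/24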
 
A little background reading on the people still employed and leading Netgate/pfSense.
https://opnsense.org/opnsense-com/
https://arstechnica.com/gadgets/202...olations-and-bad-code-freebsd-13s-close-call/
Wow! Seems like the kind of thing that happens when money & deadlines come into play. I'm thinking that my TrueNAS Core is running on FreeBSD!! I don't have anything that valuable on there, but I would not like my systems to crash or be hacked. I think I'll forget about pfSense & WireGuard then, which is a pity because there are lots of people using them & lots of helpful videos out there. I did have a quick look at OPNsense & it looks a lot like pfSense.. is it a fork of it or something? Again sorry, I'm just new to Linux.
 
I think OPNsense/pfSense (yes, it's a fork) isn't the big problem if set up correctly. At least you get regular updates fixing security holes. Your ISP's router will have a lot of security holes too, but there you often won't get any updates at all after some months/years. And with a hardware firewall like OPNsense you get a lot of features increasing security that your ISP's router won't offer because it is targeted at consumers. It probably can't handle VLANs and can't route between several local networks, so you can't split your LAN into multiple isolated DMZs. It has no intrusion prevention system blocking botnets, no recursive DNS resolver, no option to create powerful firewall rules, and so on.
And if you don't want OPNsense/pfSense there are other hardware firewalls too, like Sophos XG Firewall, OpenWRT, IPFire, Untangle's NG Firewall...

I think the biggest problem is bad configuration. OPNsense is so complex and you have so many options that it is really hard not to screw up your security. One wrong rule or clicking the wrong checkbox and your LAN is completely open to the internet without any protection. So if you don't exactly know what you are doing, it's not a bad idea to put your ISP's router in front of your OPNsense. That way you still get the basic protection of your ISP's firewall blocking 99.9% of the ports in case you screw something up, and OPNsense can still increase your security by providing the features written above if you port-forward from your ISP's router to your OPNsense WAN.
 
m0n0wall -> pfSense -> OPNsense. pfSense was a fork of m0n0wall, and then OPNsense was a fork of pfSense. Don't be scared off from WireGuard; the issue was that the guy Netgate hired, then defended, did a terrible job porting the WireGuard code over to FreeBSD. WireGuard's founding developer Jason Donenfeld looked over the port, because WireGuard is his baby, and said wait a minute, no way should this be released. Jason Donenfeld then completely rewrote the port himself and threw out the Netgate-funded badness.

WireGuard on Linux, Windows, Mac etc. was never affected by the above. I use WireGuard on OPNsense today to VPN into my home network when away. Even pfSense threw out the bad WireGuard implementation they prematurely released and moved over to the FreeBSD code Jason Donenfeld rewrote.

I also agree with @Dunuin: take your time and learn the firewall/router you choose. While doing so, leaving your ISP's router as a first line of defense is fine. There are lots of options initially with either *sense, but you will get the hang of it before long. There are plenty of videos and write-ups on both to get the basics down. You can even run either as a VM on Proxmox if you so choose.
 
Hi guys,
Thank you.. @Dunuin & @vesalius for all your help.

Well, I've tried a few things & have managed to screw up my node's ability to ping anything! As it happens, the old network switch that I have is overly complicated.. it's definitely an old enterprise device! I had to use the GUI to perform many steps & even with being careful it still didn't work. I tried to set up LACP for two of my ports, & even though I've since removed all the settings, my node cannot ping anything. After I made all the necessary changes in the switch I plugged in all four cables.. & I get this problem. Everything inside the node (all the VMs) can operate as normal, being able to see each other on the network & get all updates etc., but the node itself cannot.

I've been doing some more research & I've come to the conclusion that I'd be better off using software to create & set up the LAGG. I've read that since OPNsense is also FreeBSD based, it has the ability to create a LAGG that I can use with my TrueNAS Core VM.
See here..
https://techexpert.tips/opnsense/opnsense-link-aggregation/

So my question now is.. how can I first reset my network, & is there an option to do this? If so, how can I then enable all 4 NICs to work independently of each other?

I was thinking that I should start by only plugging in one Ethernet cable?

Any help or advice is much appreciated!
 
Can you post your current /etc/network/interfaces file? And with that, tell us which IPs you are unable to ping, and from where (what IP/subnet).
 
Yes, well I have reverted back to the default settings. I have one Ethernet cable plugged into eno1 and going to the switch.

Current /etc/network/interfaces looks like this..
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual


auto vmbr0
iface vmbr0 inet static
        address 192.168.1.4/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

My network is flat, with everything on the one subnet. The node cannot ping the gateway router, google.com, 8.8.8.8 or 1.1.1.1, or get updates. I'm using Cloudflare for my DNS settings. The only machine I can ping is the PC that I'm using to access the GUI.

The firewall on the node is off.

ip addr
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether ec:f4:bb:d0:7c:78 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ec:f4:bb:d0:7c:79 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ec:f4:bb:d0:7c:7a brd ff:ff:ff:ff:ff:ff
    altname enp1s0f2
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ec:f4:bb:d0:7c:7b brd ff:ff:ff:ff:ff:ff
    altname enp1s0f3
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ec:f4:bb:d0:7c:78 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::eef4:bbff:fed0:7c78/64 scope link
       valid_lft forever preferred_lft forever

ip r
Code:
default via 192.168.1.1 dev vmbr0 proto kernel onlink 
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.4
 
I don't think Proxmox is, or maybe even was, the issue here. How are the HP switch ports configured? Are they passing all traffic, set to a default VLAN, or...? The initial diagram suggests that Ubuntu/docker/*sense/nginx/pihole are all running on the same box. Is that true, and if so, what is the host OS and how is its networking configured?
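A few generic checks on the node itself might also help narrow it down (the gateway address below is taken from your posted config):
Code:
# link state of all ports
ip -br link
# ping the gateway out of the bridge explicitly
ping -c 3 -I vmbr0 192.168.1.1
# check whether ARP for the gateway resolves
ip neigh show 192.168.1.1
# confirm which DNS servers are in use
cat /etc/resolv.conf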
 
The switch is using all default settings.. no configuration at all. I did try & set up two LACP ports but have since reverted back to default & restarted. All ports are passing all traffic on the default VLAN 1.

The Ubuntu/docker box is not set up yet!

I have installed the Proxmox 7.0 OS onto the box labeled 'Ubuntu, docker' etc., but nothing else, as I was going to try a cluster setup. This box can get updates and is upgraded to Proxmox 7.0-8.

The Dell R710 is currently running Proxmox 7.0-11.. I think that this is unusual? (-11 seems to be too high)
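For comparison, I'll check the exact package versions on both boxes with:
Code:
pveversion -v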
 
