Network setup.

ejkeebler

Member
May 10, 2021
I'm about to upgrade my homelab with a proper router and switch, and it feels like a good time to make some changes to my Proxmox config as well. Currently I have Proxmox installed on an R720 that has a quad NIC, which I will be upgrading too: I'll go from the quad 1Gb NIC to either a quad SFP+ or a dual SFP+ / dual 1Gb card. I'm really only running two VMs on it right now: one is a TrueNAS VM and the other an Ubuntu server running Docker and lots of random services.

Currently I've just linked each VM to a physical NIC. I think it would be better to create a bond of the quad NIC and then share that bond with all the VMs on Proxmox? Is that accurate? Will that work with either quad-NIC scenario, or only if I use the quad SFP+? Or am I thinking about this completely wrong and do I need to give each VM its own NIC like I've been doing?
 
Yes, you can just bond those NICs if your managed switch supports LACP. That way your host and both VMs can dynamically share the full bandwidth (for example 4 Gbit with four Gbit NICs). Another benefit, if both VMs are on the same bridge, is that internal network communication is only limited by the performance of your CPU. So even with only Gbit NICs and without SFP+, your two VMs could communicate with each other at 10 Gbit/s (or even more if your CPU can handle it) as long as the packets aren't leaving the server.
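
If you later want to confirm that VM-to-VM traffic really stays inside the host, a quick iperf3 test between the two guests is enough. This is just a sketch; the address below is a placeholder for whatever IP your second VM actually uses:

Code:
# on one VM (e.g. the TrueNAS guest): start an iperf3 server
iperf3 -s
# on the other VM: measure throughput to it for 10 seconds
# (192.168.23.30 is only a placeholder address)
iperf3 -c 192.168.23.30 -t 10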
 
Thanks! I'll have to make that change! I guess Proxmox will have the bond, which will include all the NICs and get an IP, and then I'll just create a virtual NIC for each VM, give it an IP, and the gateway will be the IP of the bond? Or am I overcomplicating it? Throwing VLANs on top of that is probably going to add another complication to iron out :)
 
Thanks! I'll have to make that change! I guess Proxmox will have the bond, which will include all the NICs and get an IP, and then I'll just create a virtual NIC for each VM,
Yes, but make sure to use the virtio NIC model and not Intel E1000, because only virtio is paravirtualized; the other virtual NIC models will be slow. (See the example at the end of this post.)
give it an IP, and the gateway will be the IP of the bond?
No, the gateway will be the gateway of your LAN, so the VMs should use the same gateway you assigned to your host. In the normal case you don't route/NAT, so your VMs are simply bridged onto your bond. Your host would only be the gateway if you used a NAT/routed setup.
Or am I overcomplicating it? Throwing VLANs on top of that is probably going to add another complication to iron out :)
I would first try it without VLANs. If everything works you can easily add VLANs later by switching the bridge to "VLAN aware" mode and adding VLAN tags to your virtual NICs.
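
As mentioned above, here is a minimal sketch of the virtio part from the host's CLI; the VM ID 100 and the bridge name vmbr0 are just placeholders for your setup (the same thing can be done in the WebUI under Hardware -> Network Device):

Code:
# give VM 100 a paravirtualized (virtio) NIC plugged into the bridge vmbr0;
# inside the guest it simply gets an IP in your LAN and uses your normal LAN gateway
qm set 100 --net0 virtio,bridge=vmbr0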
 
This is clearly the first time I've gone down this route, but I really want to make sure I break as little as possible tomorrow and Saturday.
[Screenshot: current Proxmox network configuration — eno1-eno4 and Linux bridges vmbr0-vmbr3]

That's currently my Proxmox network config. Proxmox maps to vmbr0. I have two more VMs: I map VM#1 to vmbr1 and VM#2 to vmbr2, and when I test a VM I map it to vmbr3. So when I switch, I should be able to bond eno1, eno2, eno3 and eno4, and the bond will get one IP, in this case 192.168.23.21/24. Then I will create a Linux bridge on the bond, and that VM will get its own IP on the same subnet? And I just keep creating a new Linux bridge on the bond for as many VMs as I bring up?
 
I don't get why you want to create a bridge for each VM. It would only make sense to me if you wanted these VMs in different subnets, but even in that case I would use only one bridge and isolate the subnets with VLANs.

So in general you only need a few steps (see the config sketch below the list):
1.) Leave eno1 to eno4 without any further configuration, like in your screenshot above.
2.) Create a new "Linux Bond" bond0. For "Slaves:" add "eno1 eno2 eno3 eno4", for "Mode:" choose "LACP (802.3ad)", and for "Hash policy:" I would use "layer3+4". Make sure the autostart checkbox is enabled. For LACP bonds the same settings need to be enabled on your managed switch, so you will need to set up LACP with layer 3+4 there too for the four ports you are using.
3.) Edit your Linux bridge vmbr0 and change "Bridge ports:" to "bond0". If you want your PVE host to be reachable at 192.168.23.21/24 and to use 192.168.23.1 as its gateway, keep the address and gateway from your screenshot.
4.) Attach all VMs that should use the 192.168.23.0/24 subnet to vmbr0.
5.) Click "Apply Configuration" for both your managed switch and the PVE network config.

Now your host and VMs should be using that bond.

If you don't have physical access to a console (WebKVM or keyboard + monitor attached), it might also be useful to keep eno1 and vmbr0 as they are and just create bond0 with eno2 to eno4 as slaves and a vmbr1 attached to it. That way you can check that your bond is working, and if it doesn't work you still have access to your PVE host via eno1.
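
Just as a reference, the relevant part of /etc/network/interfaces on the PVE host after steps 1-3 would look roughly like the sketch below; this assumes the addresses from your screenshot, and the WebUI writes this file for you, so you don't have to edit it by hand:

Code:
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.23.21/24
        gateway 192.168.23.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

After applying, "cat /proc/net/bonding/bond0" should report 802.3ad mode and list all four slaves as up, which is a quick way to verify that the LACP negotiation with your switch worked.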

And if you later want to add VLANs (see the sketch after these steps):
1.) Edit your vmbr0 and check the "VLAN aware" checkbox.
2.) If you want your host to use a specific VLAN for management/internet, remove the address and gateway from vmbr0 and add a new "Linux VLAN". For "Name:" use "vmbr0.100" if it should be part of VLAN ID 100, "vmbr0.200" for VLAN ID 200, and so on. Type in the address and gateway you removed from vmbr0.
3.) Make sure your managed switch is set up correctly, so that the four ports used for bond0 carry tagged VLANs as a trunk and the VLAN your PVE host should be part of is included in that trunk.
4.) Apply the configuration.
5.) The PVE host should now only be reachable over that VLAN.
6.) For every VM, go in the WebUI to "Hardware -> Network Device (netX) -> Edit" and enter the VLAN ID this virtual NIC should be part of in "VLAN tag:". That way you don't need to care about VLANs inside the guest, because the virtual NIC does the tagged/untagged translation and the VM only sends and receives untagged traffic. If you want to use VLANs inside the guest (for example for an OPNsense VM), just leave the "VLAN tag:" field empty and configure the VLANs manually inside the guest.
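
A rough sketch of what the VLAN-aware variant from these steps could end up looking like in /etc/network/interfaces, again with VLAN ID 100 and the addresses from above only as placeholders:

Code:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.23.21/24
        gateway 192.168.23.1

(The "VLAN tag:" field in the WebUI corresponds to adding tag=100 to the VM's net device, e.g. qm set 100 --net0 virtio,bridge=vmbr0,tag=100 on the CLI.)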
 
OK, so just use the same bond for all VMs? I only have it set up the way it is now because I thought each VM was going to have to use a physical NIC. It sounds like once I create the bond, I will only need the one bridge, and all VMs will just use the same vmbr0 but have different IPs?
 
Yes, you can attach as many VMs as you want to a single bridge, and each can have its own IP. So all VMs and the host share the same bridge and the same bond, and they can all share the full 4 Gbit of bandwidth.
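
A minimal sketch of that from the host's CLI, with VM IDs 100 and 101 as placeholders; each guest then configures its own IP in 192.168.23.0/24:

Code:
# both virtual NICs are plugged into the same bridge (and therefore the same bond)
qm set 100 --net0 virtio,bridge=vmbr0
qm set 101 --net0 virtio,bridge=vmbr0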
 
Thanks! That's the part I did not understand for some reason.
 
For a better understanding: a bridge is just the virtual version of a physical switch, and attaching virtual or physical NICs to it is like plugging an ethernet cable in between the NIC and the switch.

Another benefit is that the bandwidth of the bridge is only limited by the performance of your CPU. As long as the packets don't leave your host, for example when two VMs communicate with each other, you can get 10 Gbit of bandwidth or more. So if VMs need to communicate with each other, it is very useful to have them on the same bridge. If you give each VM a dedicated bridge and NIC, you force the VM-to-VM traffic to leave your host, go through your physical switch and come back in, so VM-to-VM communication would be limited to 1 Gbit.
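
If you ever want to see what is currently "plugged into" that virtual switch, you can list the bridge's ports on the PVE host. The tap names below are only an illustration; Proxmox creates one tap<vmid>i<index> device per virtual NIC:

Code:
# list all interfaces attached to the bridge vmbr0
ip -br link show master vmbr0
# expected: bond0 plus one tap device per running VM NIC, e.g. tap100i0 and tap101i0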
 
Thanks again, that's super helpful! I'm not doing anything crazy, so it's definitely overkill, but I want to do it right if I'm going to do it, and maybe someday I will use it to full capacity. Right now one VM is a NAS and one is a Docker server that I run Plex on; sometimes I host a local game server like MC or Don't Starve for myself and my son, plus my own password manager and Home Assistant, so not a ton of data needed. But I would like to start getting some backups and syncing happening from mostly wireless devices, and eventually, who knows.
 
