Question about Link Aggregation and VMs

ceftee

New Member
Oct 2, 2022
Hi there,

I'm in the process of expanding my homelab and I want to get started with Proxmox and virtualization. I haven't found any information in previous threads regarding my question, so I apologize in advance if I missed it.

I'm rebuilding my setup with a refurbished HPE ProLiant DL380 Gen9 server and a Ubiquiti 48-port Layer 2 Gigabit switch, and I just upgraded to a gigabit fiber connection. I want to use the server for virtualization with Proxmox VE and create a TrueNAS Core VM for my home NAS needs, since that is the software I'm most familiar with for that purpose. I want to increase my available bandwidth by configuring link aggregation. I've already verified that the included FlexibleLOM adapter in my server (HP 331FLR quad-port Gigabit) and my switch both support link aggregation per the IEEE 802.3ad standard (LACP).

What I want to clear up is the following:

1) Would setting up a 4-port link aggregation on Proxmox VE automatically make that additional bandwidth available to each of my VMs (so that, say, my TrueNAS Core VM would run on the same aggregated bandwidth), or are there other hardware/software configuration requirements?

2) If further configuration is required, would I need to acquire a separate 4-port Gigabit PCIe NIC (like the Intel I350-T4, for example) and pass it through to the TrueNAS Core VM via PCIe passthrough, so that I can configure the aggregation inside the VM itself?

Please forgive me if this is too basic or even a bit nonsensical, I'm just very new to this stuff and I want to hit the ground running :)

Thanks for the help
-ceftee
 
A bonding interface (link aggregation) is one port of the bridge. The other ports are the tap interfaces for the VMs.
Your VM does not need its own bonding interface to benefit from the hardware-bonded interface of the host.
But remember that you only get "more" bandwidth if there are multiple participants on the network. Between two devices (your TrueNAS VM and one client) the packets will usually be balanced onto a single physical interface. That depends on the hash algorithm you choose when configuring the LACP bond; Linux offers layer2+3 or layer3+4, for example. The switch may only use layer 2 addresses to balance packets across the ports of the bonding interface.
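For reference, a minimal sketch of what the host side could look like in /etc/network/interfaces on the Proxmox node. The interface names (eno1..eno4), the bridge address and the gateway are placeholders, adjust them to your hardware and network:

auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

The switch side then just needs a matching LACP (802.3ad) group on those four ports. Each VM's virtual NIC is simply attached to vmbr0; Proxmox adds the tap interfaces as bridge ports for you, so nothing needs to be bonded inside the guest.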
 
OK, so basically I do not need a dedicated NIC to take advantage of the aggregation. But when you say "multiple participants", do you mean multiple users of the VM (for example, multiple people requesting files from the NAS)? My switch is Layer 2 according to its description, so maybe link aggregation won't bring me the performance benefits I was looking for.
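I suppose once the bond is up I can at least check how it is actually balancing traffic, roughly like this (bond0 and eno1 just being whatever names end up on my host):

cat /proc/net/bonding/bond0    # shows the bonding mode, LACP partner info and the transmit hash policy
ip -s link show eno1           # per-port traffic counters, to see which physical link the flows land on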
 