LAG / VLAN on node setup

ChrisTG74
Nov 12, 2025
Hi there,
I tried the search function but did not find anything fruitful for my question:

Is it planned that the VE installer will support defining link aggregation (e.g. 802.3ad) and VLANs already while installing the node?
It is a real hassle to manually type in an interfaces file via the console after installation.
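For reference, this is roughly what has to be typed in manually today: a typical /etc/network/interfaces with an 802.3ad bond and a VLAN-aware bridge. The interface names and addresses are just examples:

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# LACP bond across both NICs (switch side must be configured for LACP)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

# VLAN-aware management bridge on top of the bond
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094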

Another possibility would be to support some kind of "answer file" for unattended installation. For example, the installer could check for an XML file on the media and, if present, read the values from there instead of from the UI.
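Just to sketch the idea (purely hypothetical; no such format exists in the installer today, and every element name below is made up):

Code:
<pve-installer-answers>
  <network>
    <bond mode="802.3ad" slaves="eno1 eno2"/>
    <vlan id="100"/>
    <address>192.168.1.10/24</address>
    <gateway>192.168.1.1</gateway>
  </network>
</pve-installer-answers>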

IMHO a decent solution for this is really needed to be considered "enterprise ready", as LAG and VLAN usage are common practice in datacenters today.

Many thanks in advance.
 
There's already an open feature request: https://bugzilla.proxmox.com/show_bug.cgi?id=2164

So yes, it is planned down the line.

ChrisTG74 said:
Other possibility would be to support some kind of "answer-file" for unattended installation. For example the installer could check for an XML-file on the media and if present read the values from there instead of the UI.
FWIW, if you are already doing automated installations, a first-boot script could be used to automate that.
 
Hi Christoph,
thanks for the information. Do you have any clue when the next round of installer features will be released? Is that done on a regular schedule?
TIA!
 
PS: For the "first-boot script" - is there any documentation on how to use it along with the Proxmox VE installer? Thanks!
 
Do you have any clue when the next round of installer features will be released? Is that done on a regular schedule?
The installer is updated with each new major/minor release, i.e. when new ISOs are provided for any product.
 
This is part of my first-boot script that detects network interfaces and forms them into a bond.
In my case it's not an 802.3ad bond, but by adjusting some parameters you should be able to get that too:


Bash:
# Set network configuration - BEGIN
nodename=$(hostname)

# Collect the names of all physical (eth) interfaces, sorted alphabetically
interfaces=$(pvesh get nodes/${nodename}/network --type eth --output-format json | jq -r '[.[].iface] | join(" ")')
interface_array=($interfaces)
IFS=$'\n' interfaces_sorted=($(sort <<<"${interface_array[*]}")); unset IFS

# Remember the address/gateway currently assigned to vmbr0
IPADDR_JSON=$(pvesh get /nodes/${nodename}/network/vmbr0 --output-format=json)
IPADDR=$(echo "$IPADDR_JSON" | jq -r '.cidr')
GATEWAY=$(echo "$IPADDR_JSON" | jq -r '.gateway')

# Raise the MTU on all physical interfaces
for i in $(pvesh get nodes/${nodename}/network --type eth --output-format json | jq -r '.[].iface'); do
  pvesh set /nodes/${nodename}/network/${i} --mtu 9000 --type eth
done

# Replace vmbr0 with a bond carrying the same address
pvesh delete /nodes/${nodename}/network/vmbr0
pvesh create /nodes/${nodename}/network --type bond --iface bond0 --bond_mode balance-rr --mtu 9000 --bond-primary "${interfaces_sorted[0]}" --slaves "${interfaces_sorted[0]} ${interfaces_sorted[1]}" --cidr "${IPADDR}" --gateway "${GATEWAY}" --autostart 1 --comments "Bond for Management-Traffic"

# Apply the pending network configuration
pvesh set /nodes/${nodename}/network
# END

This code sets the MTU of all physical interfaces to 9000, deletes the initial vmbr0 device, creates a new "balance-rr" bond and adds the first two hardware network interfaces to it.
Then it applies the new network configuration via "pvesh set".
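For an 802.3ad (LACP) bond instead of balance-rr, the create call would look roughly like this. This is an untested sketch: the switch ports must be configured for LACP, bond-primary is dropped because it only applies to active-backup, and the hash policy is just one common choice:

Bash:
pvesh create /nodes/${nodename}/network --type bond --iface bond0 \
  --bond_mode 802.3ad --bond_xmit_hash_policy layer2+3 --mtu 9000 \
  --slaves "${interfaces_sorted[0]} ${interfaces_sorted[1]}" \
  --cidr "${IPADDR}" --gateway "${GATEWAY}" --autostart 1 \
  --comments "LACP bond for Management-Traffic"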
 