I have a Proxmox cluster (5 host nodes) where each host node uses vmbr0 for network access (/etc/network/interfaces includes):
Code:
auto vmbr0
iface vmbr0 inet static
address 10.XXX.XXX.XXX/24
gateway 10.XXX.XX.X
bridge-ports eno1
bridge-stp off
bridge-fd 0
Cluster VMs on different host nodes then communicate via:
Code:
auto vmbr2
iface vmbr2 inet manual
bridge-ports vxlan2
bridge-stp off
bridge-fd 0
auto vxlan2
iface vxlan2 inet manual
vxlan-id 2
vxlan-remoteip 10.XXX.XXX.XXX
vxlan-remoteip 10.XXX.XXX.XXX
vxlan-remoteip 10.XXX.XXX.XXX
vxlan-remoteip 10.XXX.XXX.XXX
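For context, once vmbr2 and vxlan2 are up, the tunnel state can be inspected on any node with standard iproute2 commands (interface names taken from the config above):

```shell
# Show the VXLAN parameters (VNI, local IP, remote VTEP entries) of vxlan2
ip -d link show vxlan2

# List the forwarding entries that point at the remote VTEP IPs
bridge fdb show dev vxlan2

# Confirm vxlan2 is enslaved to the vmbr2 bridge
bridge link show
```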
I'd like to retrospectively use a bond of eno1 and eno2 on each host node.
1. Can I simply add the following to each host node?
Code:
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 10.XXX.XXX.XXX/24
gateway 10.XXX.XXX.X
bridge-ports bond0
bridge-stp off
bridge-fd 0
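On the rollout itself, one cautious approach (a sketch, not an official procedure) is to stage the change on one node at a time, apply it with ifupdown2's ifreload, and verify LACP negotiation before moving to the next node. This assumes ifupdown2 (the Proxmox default since PVE 7) and a switch-side LACP port-channel already configured for both ports; having out-of-band console access is strongly advisable in case the node drops off the network:

```shell
# Back up the current config before editing
cp /etc/network/interfaces /etc/network/interfaces.bak

# After adding the bond0/vmbr0 stanzas, apply the change without a reboot (ifupdown2)
ifreload -a

# Verify the bond is in 802.3ad mode and both slaves have joined the aggregator
cat /proc/net/bonding/bond0

# Confirm vmbr0 now enslaves bond0, then check reachability to a peer node
bridge link show
ping -c 3 10.XXX.XXX.XXX   # IP of another cluster node (placeholder)
```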
2. Is above bond0 / vmbr0 compatible with Proxmox cluster (i.e. if I do this on every host node)?
3. Is the bond-mode ok?
4. Can I add this retrospectively to each host node (even though already in cluster using vmbr0)?
5. Is the bond compatible with vmbr2 / use of vxlan?
Thanks in advance!