I've been searching and can't find anything similar in any forum posts, so sorry if this is a known issue... but here goes.
I had a working cluster of 3 Intel NUC servers, each running Proxmox. I got a good deal on a newer NUC, so I bought it and swapped the NVMe and SATA drives from my slowest server into the new NUC, expecting it to basically come back up and work as before.
The new NUC, as it turns out, has different NIC names, so networking didn't start up properly. I connected directly to the server, figured out what the new NIC names were, found some documentation online about editing /etc/network/interfaces, and edited it accordingly. All I changed was the NIC identifier (enp89s0 in the config below).
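In case it's useful, this is roughly how I found the new name and what the edit looked like (the old interface name below is from memory, so treat it as approximate):

Code:
# list the physical NICs the new NUC actually has
ip -br link show

# in /etc/network/interfaces, the only change was swapping the old NIC name
# for the new one everywhere it appeared, e.g.:
#   iface eno1 inet manual   ->   iface enp89s0 inet manual
#   bond-slaves eno1         ->   bond-slaves enp89s0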
Networking now works insofar as I can manage the server as a healthy member of the cluster via its IP of 192.168.1.253, and I can even migrate VMs to the server, but the VMs have no network access on either subnet.
What could cause networking to work from the host's perspective but not from the VMs' perspective in this case? I have torn down and recreated the vmbr interfaces in the GUI, and it hasn't made any difference.
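I'm happy to post more host-side output if it would help; I'd guess the useful things to capture are something like the following (correct me if there are better commands):

Code:
# bridge and port status as the host sees it
ip -br link show type bridge
ip link show master vmbr1

# VLAN membership on the vlan-aware bridge
bridge vlan show

# bond status
cat /proc/net/bonding/bond0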
For a little more background: the reason this is configured with a bond0 is that I used to have a USB-C NIC bonded with the built-in gigabit port. That turned out to be more trouble than it was worth, but I never bothered to remove the rest of the bond-related config. The same setup works on my other NUCs and was working on this one with the old hardware.
Any help is greatly appreciated! I've been pounding my head against the wall on this, on and off, for a couple of weeks now. My next step would be a full reinstall, but I'd hate to do that without learning what my config error actually was!
Code:
auto lo
iface lo inet loopback

auto enp89s0
iface enp89s0 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp89s0
        bond-miimon 100
        bond-mode balance-rr

auto bond0.1
iface bond0.1 inet manual
#Workstations

auto bond0.10
iface bond0.10 inet manual

auto vmbr10
iface vmbr10 inet manual
        bridge-ports bond0.10
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.253/24
        gateway 192.168.1.1
        bridge-ports bond0.1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Workstation vlan