Hello all,
I'm trying to set up a new Proxmox server and I'm a bit stuck at a specific point. Any help would be appreciated. First, my system:
1) Hardware: HP ProLiant DL360p Gen8 with a Smart Array P420i
2) Software: Because of the Smart Array P420i (long story short) I had to install Debian Bullseye first and then Proxmox 7.1 on top of it. No issues here, and I think it's unrelated to the problem.
Now the problem:
The DL360p comes with a 4-port LOM (LAN on Motherboard) that's basically a four-port BCM5719. My goal is to set up two of the ports in a bond and reserve the other two for another project.
So, following https://pve.proxmox.com/wiki/Network_Configuration, specifically the "Linux Bond" chapter, I created a bond0 (round-robin, since I have a crappy switch) without giving it an IP address, and then a bridge (vmbr0) with a static address.
The problem is that, when I create the bridge, the bond gets an IP address from God knows where and I can't access the system properly: I can SSH to the bridge, but can't connect through the interface, and the server can't go online.
A funny detail: if I run
systemctl restart networking
it works for a few seconds, but then the bond gets an IP again and it stops working. Now, apparently this is because the default route is going through the bond:
1)
ip route show
default dev bond0 scope link
default via 192.168.17.1 dev vmbr0 proto kernel onlink
169.254.0.0/16 dev bond0 proto kernel scope link src 169.254.48.255
192.168.17.0/24 dev vmbr0 proto kernel scope link src 192.168.17.21
If I change the default route to the vmbr0 interface with the correct gateway everything works until the next reboot.
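For reference, the manual fix I apply after boot is roughly this (run as root; I'm not sure these are the exact commands everyone would use, but they do the trick here):

```
# remove the bogus link-local default route on the bond
ip route del default dev bond0
# make sure the real default route points at the router via the bridge
ip route replace default via 192.168.17.1 dev vmbr0
```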
OK, so I tried to solve this in /etc/network/interfaces by adding
post-up ip route replace default via 192.168.17.1 dev vmbr0
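In context, the vmbr0 stanza then looks like this (same addresses as in the full config further down, plus the post-up line):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.17.21/24
    gateway 192.168.17.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    post-up ip route replace default via 192.168.17.1 dev vmbr0
```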
The problem is that, since it takes a few seconds for the bond to get an IP, the "default dev bond0 scope link" route gets added AFTER the "default via 192.168.17.1 dev vmbr0" one, and I end up with two default routes.
Any idea how to solve this? Is it only happening to me? I'm not a network expert, so any help you guys can provide on how to solve/troubleshoot this is most welcome.
Best regards,
Some more info:
1)
cat /etc/network/interfaces
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
auto eno3
iface eno3 inet manual
iface eno4 inet manual
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode balance-rr
    bond-downdelay 200
    bond-updelay 200

auto vmbr0
iface vmbr0 inet static
    address 192.168.17.21/24
    gateway 192.168.17.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
2)
ip a
6: bond0: <BROADCAST,MULTICAST,MASTER,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 8e:b0:ae:0b:46:f5 brd ff:ff:ff:ff:ff:ff
    inet 169.254.253.178/16 brd 169.254.255.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::8cb0:aeff:fe0b:46f5/64 scope link
       valid_lft forever preferred_lft forever
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8e:b0:ae:0b:46:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.17.21/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::8cb0:aeff:fe0b:46f5/64 scope link
       valid_lft forever preferred_lft forever