eno1 & eno2 missing after restoring VMs and configuring FRR

whytebredd

New Member
Sep 15, 2024
I have two Dell servers (an R730 and an R720XD), each connected to the switch with two Ethernet cables as bond0 under vmbr0, and connected to each other with two DAC cables. The bond was working well, and I configured FRR between the servers following the steps in this video: https://www.youtube.com/watch?v=dAjw_4EpQdk
That worked fine, so I began restoring VMs from a backup to the R730. At about 40% the restore froze, and now, two days later, that machine has no network except the FRR links. I can SSH to it from the R720XD over IPv6, but the IPv4 link to the switch is down.

eno1 & eno2 are supposed to be in bond0, but they no longer appear in ip a, and ifup eno1/2 reports them as "not recognized".

They are rack mounted and I haven't physically touched them, so I'm 99% sure this is not a hardware problem. The FRR configs on prometheus and the other machine (covenant) are virtually identical, and iperf3 between the two shows >9 Gbps. This didn't happen until the VM restore, so I'm at a loss.
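To separate an ifupdown problem from a kernel/driver problem, one quick check is whether the kernel still exposes the NICs at all. This is only a sketch, assuming the standard Linux sysfs layout:

```shell
# If a NIC is listed under /sys/class/net, the kernel/driver sees it and
# the problem is on the ifupdown side; if it is missing, the driver never
# registered it (in that case, check dmesg for the tg3 driver).
for nic in eno1 eno2; do
  if [ -e "/sys/class/net/$nic" ]; then
    echo "$nic: present"
  else
    echo "$nic: missing from kernel"
  fi
done
```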

root@prometheus:~# ifdown eno1
root@prometheus:~# ifdown eno2
root@prometheus:~# ifdown vmbr0
root@prometheus:~# ifup eno1
warning: eno1: interface not recognized - please check interface configuration
root@prometheus:~# ifup eno2
warning: eno2: interface not recognized - please check interface configuration
root@prometheus:~# ifup vmbr0
error: vmbr0: cmd '/bin/ip route replace default via fd0f::1 proto kernel dev vmbr0 onlink' failed: returned 2 (Error: Nexthop has invalid gateway or device mismatch.
)
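That vmbr0 error makes some sense in hindsight: the IPv6 gateway fd0f::1 is not inside vmbr0's fd00::/64 prefix, so the kernel only accepts it as an onlink route, and only while vmbr0 has carrier. A couple of checks that show this (a sketch; paths assume the standard Linux sysfs layout):

```shell
# Ask the kernel how it would reach the gateway right now; with the
# bridge down this reports that the network is unreachable.
ip -6 route get fd0f::1 2>&1 || true

# An onlink gateway is only usable while the bridge has carrier:
cat /sys/class/net/vmbr0/carrier 2>/dev/null \
  || echo "vmbr0 has no carrier / does not exist"
```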

Here are ip a, ip route, ip -6 route, /etc/network/interfaces, and frr.conf from the affected machine. interfaces.d is empty.

root@prometheus:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 fd69:beef:cafe::103/128 scope global
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 80:18:44:e0:0c:62 brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 80:18:44:e0:0c:63 brd ff:ff:ff:ff:ff:ff
altname enp2s0f1
6: enp131s0f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:fd:fe:34:29:e0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::3efd:feff:fe34:29e0/64 scope link
valid_lft forever preferred_lft forever
7: enp131s0f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:fd:fe:34:29:e2 brd ff:ff:ff:ff:ff:ff
inet6 fe80::3efd:feff:fe34:29e2/64 scope link
valid_lft forever preferred_lft forever
12: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue master vmbr0 state DOWN group default qlen 1000
link/ether 42:b6:e6:46:15:0b brd ff:ff:ff:ff:ff:ff
14: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 42:b6:e6:46:15:0b brd ff:ff:ff:ff:ff:ff
inet 192.168.1.103/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fd00::3/64 scope global tentative
valid_lft forever preferred_lft forever
root@prometheus:~# ip route
default via 192.168.1.1 dev vmbr0 proto kernel onlink linkdown
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.103 linkdown
root@prometheus:~# ip -6 route
fd00::/64 dev vmbr0 proto kernel metric 256 linkdown pref medium
fd0f::/64 nhid 63 proto ospf metric 20 pref medium
nexthop via fe80::a236:9fff:fe49:9d50 dev enp131s0f1np1 weight 1
nexthop via fe80::a236:9fff:fe49:9d52 dev enp131s0f0np0 weight 1
fd69:beef:cafe::103 dev lo proto kernel metric 256 pref medium
fe80::/64 dev enp131s0f0np0 proto kernel metric 256 pref medium
fe80::/64 dev enp131s0f1np1 proto kernel metric 256 pref medium
root@prometheus:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto enp131s0f0
iface enp131s0f0 inet manual

auto enp131s0f1
iface enp131s0f1 inet manual

auto enp131s0f0np0
iface enp131s0f0np0 inet manual

auto enp131s0f1np1
iface enp131s0f1np1 inet manual

auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode balance-alb

auto vmbr0
iface vmbr0 inet static
address 192.168.1.103/24
gateway 192.168.1.1
bridge-ports bond0
bridge-stp off
bridge-fd 0

iface vmbr0 inet6 static
address fd00::3/64
gateway fd0f::1

source /etc/network/interfaces.d/*
root@prometheus:~# cat /etc/frr/frr.conf
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
log syslog informational

ipv6 forwarding

!
interface lo
ipv6 address fd69:beef:cafe::103/128
ipv6 ospf6 area 0.0.0.0
ipv6 ospf6 passive

!
interface vmbr0
ipv6 ospf area 0.0.0.0
ipv6 ospf6 network broadcast
ipv6 ospf cost 100
!
interface enp131s0f0np0
ipv6 ospf6 area 0.0.0.0
ipv6 ospf6 network point-to-point
ipv6 ospf6 cost 10

!
interface enp131s0f1np1
ipv6 ospf6 area 0.0.0.0
ipv6 ospf6 network point-to-point
ipv6 ospf6 cost 10

!
router ospf6
ospf6 router-id 0.1.0.3
redistribute connected
auto-cost reference-bandwidth 100000
root@prometheus:~# ls /etc/network/interfaces.d
root@prometheus:~#

Not sure where to go from here. Any help is appreciated.
 
I moved the cables over and changed bond0 to use eno3 & eno4, so now everything works as normal. I'd still like to know what happened to eno1 & eno2 in case this happens again, since all four ports are on the same card.
Prior to switching over, dmesg was giving an error along the lines of "the permanent hwaddr of xxx is still in use by bond0".
eno1 & eno2 still don't show up in ip a despite still having the auto flag in interfaces, and there is effectively zero activity regarding eno1/eno2 in dmesg.
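That "permanent hwaddr ... still in use by bond0" message suggests the bond never cleanly released its old slaves. The bonding driver exposes its slave list through sysfs, which can show (and, carefully, release) a stale slave. A sketch, assuming the standard bonding sysfs interface:

```shell
# List the slaves bond0 currently claims:
if [ -e /sys/class/net/bond0/bonding/slaves ]; then
  cat /sys/class/net/bond0/bonding/slaves
  # A stale slave can be force-released like this (destructive,
  # only run it when you mean it):
  # echo "-eno1" > /sys/class/net/bond0/bonding/slaves
else
  echo "no bond0 on this machine"
fi
```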

root@prometheus:~# dmesg | grep eno1
[ 5.451997] tg3 0000:01:00.0 eno1: renamed from eth0
root@prometheus:~# dmesg | grep eno2
[ 5.509996] tg3 0000:01:00.1 eno2: renamed from eth1
root@prometheus:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 fd69:beef:cafe::103/128 scope global
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
4: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 80:18:44:e0:0c:62 brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
5: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 80:18:44:e0:0c:63 brd ff:ff:ff:ff:ff:ff
altname enp2s0f1
6: enp131s0f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:fd:fe:34:29:e0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::3efd:feff:fe34:29e0/64 scope link
valid_lft forever preferred_lft forever
7: enp131s0f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:fd:fe:34:29:e2 brd ff:ff:ff:ff:ff:ff
inet6 fe80::3efd:feff:fe34:29e2/64 scope link
valid_lft forever preferred_lft forever
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 80:18:44:e0:0c:62 brd ff:ff:ff:ff:ff:ff
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 80:18:44:e0:0c:62 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.103/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fd00::3/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::8218:44ff:fee0:c62/64 scope link
valid_lft forever preferred_lft forever
root@prometheus:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto enp131s0f0
iface enp131s0f0 inet manual

auto enp131s0f1
iface enp131s0f1 inet manual

auto enp131s0f0np0
iface enp131s0f0np0 inet manual

auto enp131s0f1np1
iface enp131s0f1np1 inet manual

auto bond0
iface bond0 inet manual
bond-slaves eno3 eno4
bond-miimon 100
bond-mode balance-alb

auto vmbr0
iface vmbr0 inet static
address 192.168.1.103/24
gateway 192.168.1.1
bridge-ports bond0
# bridge-ports eno2
bridge-stp off
bridge-fd 0

iface vmbr0 inet6 static
address fd00::3/64
gateway fd0f::1

source /etc/network/interfaces.d/*
root@prometheus:~#
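For anyone else trying to confirm which ports live on which card: the dmesg lines above put eno1/eno2 on PCI bus 0000:01:00.x (tg3), and each port's PCI address can be read from sysfs, so same-card ports show up with the same bus prefix. A small sketch (standard Linux sysfs layout assumed):

```shell
# Print each NIC's PCI address; ports that share a card/bus show the
# same "0000:bb:dd" prefix.
for nic in eno1 eno2 eno3 eno4; do
  dev="/sys/class/net/$nic/device"
  if [ -e "$dev" ]; then
    echo "$nic -> $(basename "$(readlink "$dev")")"
  else
    echo "$nic -> not present"
  fi
done
```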
 
