Problematic network interfaces after board upgrade

May 3, 2022
I upgraded the motherboard on my server running Proxmox 7.2-11 (B550D4-4L).

ip link shows the following interfaces:

enp34s0: onboard Intel I210
enp35s0: onboard Intel I210
enp36s0: onboard Intel I210
enp41s0: onboard Intel I210, shared with IPMI
enp1s0f0: PCIe NIC, 2x Intel 82576
enp1s0f1: PCIe NIC, 2x Intel 82576
enx269685a21b18: likely the IPMI dedicated LAN? Not sure why it's visible

Connecting an ethernet cable to ANY of the interfaces aside from enp41s0 (and slaving it to vmbr0 in /etc/network/interfaces) results in the port continuing to show as down in ip link. Initially I thought the motherboard ports might be defective; however, the exact same behavior is exhibited by a known-good PCIe NIC.
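Before blaming hardware outright, it may be worth ruling out the port simply being administratively down and querying the PHY directly; a quick diagnostic sketch (interface name enp34s0 taken from this thread, run as root):

```shell
# A bridge usually brings its slave up, but set it explicitly to be sure
ip link set dev enp34s0 up

# Ask the PHY directly: "Link detected: no" on a port that is up and
# cabled points at the driver/firmware rather than the cable or switch
ethtool enp34s0

# Confirm which kernel driver claims the port (igb is expected for I210)
ethtool -i enp34s0
```

If ethtool reports no driver at all, the problem is below the network layer and /etc/network/interfaces changes won't help.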

I can pass the NIC through to VMs fine, however I haven't had the chance to test whether they can connect using it (will try next time I'm at the DC). When the PCIe NIC is passed back to the host, the device reappears in lspci, but the interfaces do not return.
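One possible explanation for "visible in lspci but no interface": after passthrough the device can remain bound to vfio-pci, so no netdev is created until it is rebound to igb. A sketch, assuming the 82576's PCI address 01:00.0 from the lspci output later in the thread (run as root):

```shell
# If "Kernel driver in use: vfio-pci" shows here, the NIC was never
# handed back to the network driver after passthrough
lspci -nnk -s 01:00.0

# Rebind the function to igb (a reboot achieves the same thing)
echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
echo 0000:01:00.0 > /sys/bus/pci/drivers/igb/bind
```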

No lights activate on the ports, which implies no attempt is made to negotiate a link. Nothing appears in journalctl or dmesg when connecting or disconnecting the cable.

Additionally, enp41s0 will just outright vanish after a seemingly random period of time (the first time after a minute, most recently after 6 days) if it is used. There is no drop in connection beforehand; the machine remains fully connected until the interface disappears. The only dmesg line about this is "enp41s0 left promiscuous mode".
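To catch the moment the interface vanishes, one option is to log netlink link events with timestamps alongside the kernel log; a sketch:

```shell
# Record every link add/remove/state-change event with a timestamp,
# including ones that never reach dmesg
ip -timestamp monitor link >> /root/link-events.log &

# In a second session, follow kernel messages mentioning the driver or
# the interface (--grep needs systemd built with PCRE support)
journalctl -kf --grep 'igb|enp41s0'
```

Correlating the timestamp of the disappearance with the kernel log may show whether the driver, firmware, or something like a watchdog is removing the device.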

interfaces file:
Code:
auto lo
iface lo inet loopback

iface enp34s0 inet manual

iface enp35s0 inet manual

iface enp36s0 inet manual

iface enp41s0 inet manual

iface enx269685a21b18 inet manual

iface enp1s0f0 inet manual

iface enp1s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
    address xxx.xxx.xxx.xxx/29
    gateway xxx.xxx.xxx.xxx
    bridge-ports enp41s0 # changed to each interface during testing
    bridge-stp off
    bridge-fd 0
 
I have a Gigabyte motherboard and my Intel I210 network ports also do not work.
 
What is the output from the following two commands?
lspci | grep Ethernet

ip a
 
Code:
➜ ~ lspci | grep Ethernet
01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
22:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
23:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
24:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
29:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
➜ ~ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: enp34s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a8:a1:59:c0:f5:11 brd ff:ff:ff:ff:ff:ff
5: enp35s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a8:a1:59:c0:f5:13 brd ff:ff:ff:ff:ff:ff
6: enp36s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a8:a1:59:c0:f5:14 brd ff:ff:ff:ff:ff:ff
7: enp41s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether a8:a1:59:c0:f5:12 brd ff:ff:ff:ff:ff:ff
8: enx269685a21b18: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 26:96:85:a2:1b:18 brd ff:ff:ff:ff:ff:ff
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a8:a1:59:c0:f5:12 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.xxx.xxx/29 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 xxxx:xxxx:7::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa1:59ff:fec0:f512/64 scope link
       valid_lft forever preferred_lft forever
 
Weird, this might need more help than I can give. Did you try passthrough or similar on the system with the old motherboard? Did you move the old Proxmox boot drive to the new motherboard and boot from it? What version of Proxmox are you using?

From the lspci details I would expect enp1s0f0 and enp1s0f1 to at least appear in ip a for the Intel 82576 PCIe card.
 
Did you try passthrough or similar on the system with the old motherboard? Did you move the old Proxmox boot drive to the new motherboard and boot from it?
No change to the system other than the motherboard. Passthrough works as expected on both the old and new motherboards.

From the lspci details I would expect enp1s0f0 and enp1s0f1 to at least appear in ip a for the Intel 82576 PCIe card.
Yes, I suspect it's some driver/kernel weirdness. I can remotely mount an ISO and boot it via IPMI, so I'm going to try a clean reinstall. My reasoning is that the install was originally done for a different board, so the change may be confusing how interfaces are handled.
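Before the reinstall, it might also be worth checking whether interface-naming state left over from the old board is involved; a sketch of what to look at (enp41s0 from this thread):

```shell
# Properties udev uses to build the predictable name for this NIC
udevadm info /sys/class/net/enp41s0 | grep ID_NET_NAME

# Leftover naming overrides from the previous install, if any exist
ls -l /etc/systemd/network/*.link /etc/udev/rules.d/*net* 2>/dev/null

# Naming-scheme toggles (net.ifnames=0 etc.) on the kernel command line
cat /proc/cmdline
```

If a stale .link file or udev rule pins names to the old board's MAC addresses, interfaces can come up under unexpected names or not at all.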
 
