Network connectivity broken in 5.11.21-1?

victorhooi

Member
Apr 3, 2018
Hi

I have a new system that I've recently set up with Proxmox 6.4.

The network card is a Mellanox ConnectX-5 with 100GbE ports.

I've also installed the latest 5.11 kernel to test it out. I know that 5.11.x was working on previous installs on this system.

However, after an update yesterday, the current 5.11 kernel (5.11.21-1) seems to have broken networking - when I boot this kernel, the vmbr0 interface is in state UNKNOWN and I'm not able to ping the upstream router.

If I then boot back into the 5.4.119-1 kernel, the interface is back in state "UP" and I have network connectivity again.

Has anybody else seen this issue? Or know what's going on?

Thanks,
Victor
 

mira

Proxmox Staff Member
Staff member
Aug 1, 2018
Could it be the interface name changed with kernel 5.11?
 

victorhooi

Member
Apr 3, 2018
Hmm, this is the output of ip addr with the 5.4 kernel:
Code:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e0:d5:5e:96:9a:18 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e0:d5:5e:96:9a:19 brd ff:ff:ff:ff:ff:ff
4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 0c:42:a1:02:08:dc brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:42:a1:02:08:dc brd ff:ff:ff:ff:ff:ff
    inet 10.7.12.3/23 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::e42:a1ff:fe02:8dc/64 scope link
       valid_lft forever preferred_lft forever
7: enp3s0f3u1u3c2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:32:12:74:58:a9 brd ff:ff:ff:ff:ff:ff

And with the 5.11 kernel (I had to take a screenshot, as I can't SSH in to the box):

[Attachment: Screen Shot 2021-06-10 at 9.50.22 pm.png]

If I check /etc/network/interfaces, I see that vmbr0 is bridging the port enp1s0.

In 5.11 - are you thinking it's been renamed to enp1s0np0?

Will simply editing the /etc/network/interfaces file directly fix this? Is that a good practice?
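One way I could double-check from the console (since SSH is down) is to walk /sys/class/net and look for the new name. The npN-suffix check below is just my guess that the mlx5 driver is behind the rename:

```shell
# Walk /sys/class/net and print each interface with its operstate,
# flagging any name ending in an "npN" suffix (mlx5 switchdev-style
# port naming, which newer kernels expose by default).
for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    state=$(cat "$dev/operstate" 2>/dev/null || echo "?")
    case "$name" in
        *np[0-9]) suffix="  <- kernel-renamed port?";;
        *)        suffix="";;
    esac
    printf '%-18s %s%s\n' "$name" "$state" "$suffix"
done
```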
 

mira

Proxmox Staff Member
Staff member
Aug 1, 2018
Yes, sometimes interface names change because of kernel or BIOS updates and so on.
It seems yours now has 'np0' appended at the end.
Just change enp1s0 to enp1s0np0 in /etc/network/interfaces and you should be able to apply the change with `ifreload -a`.
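Concretely, the vmbr0 stanza would end up looking something like this (the address is taken from your ip addr output above; the other bridge lines are placeholders for whatever is already in your file):

```
auto vmbr0
iface vmbr0 inet static
        address 10.7.12.3/23
        bridge-ports enp1s0np0   # was: bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
```

After saving, `ifreload -a` (from ifupdown2, which Proxmox 6.4 installs by default) applies the change without a reboot.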
 
