issue with vmbr0: port 1(eno1) entered disabled state

killmasta93

Hi,
I was wondering if someone else has had this issue before. This morning none of the VMs could get a ping; I had to reboot forcefully, and then everything started working again. I checked the logs and found this:
Code:
Jul  4 07:12:07 prometheus kernel: [10076436.122355] igb 0000:06:00.0 eno1: igb: eno1 NIC Link is Down
Jul  4 07:12:07 prometheus kernel: [10076436.122602] vmbr0: port 1(eno1) entered disabled state
Jul  4 07:12:11 prometheus kernel: [10076439.418586] igb 0000:06:00.0 eno1: igb: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Jul  4 07:12:11 prometheus kernel: [10076439.418793] vmbr0: port 1(eno1) entered blocking state
Jul  4 07:12:11 prometheus kernel: [10076439.418799] vmbr0: port 1(eno1) entered forwarding state
Jul  4 07:15:24 prometheus kernel: [10076632.902732] igb 0000:06:00.0 eno1: igb: eno1 NIC Link is Down
Jul  4 07:15:24 prometheus kernel: [10076632.902773] vmbr0: port 1(eno1) entered disabled state
Jul  4 07:15:28 prometheus kernel: [10076636.366730] igb 0000:06:00.1 eno2: igb: eno2 NIC Link is Down
Jul  4 07:15:28 prometheus kernel: [10076636.366793] vmbr1: port 1(eno2) entered disabled state
Jul  4 07:15:28 prometheus kernel: [10076636.596040] igb 0000:06:00.0 eno1: igb: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Jul  4 07:15:28 prometheus kernel: [10076636.596264] vmbr0: port 1(eno1) entered blocking state
Jul  4 07:15:28 prometheus kernel: [10076636.596274] vmbr0: port 1(eno1) entered forwarding state
Jul  4 07:15:29 prometheus kernel: [10076638.031009] igb 0000:06:00.1 eno2: igb: eno2 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
Jul  4 07:15:29 prometheus kernel: [10076638.031150] vmbr1: port 1(eno2) entered blocking state
Jul  4 07:15:29 prometheus kernel: [10076638.031154] vmbr1: port 1(eno2) entered forwarding state

This is my PVE version:
Code:
pve-manager/5.3-8/2929af8e (running kernel: 4.15.18-10-pve)

Thank you
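
For reference, the link status and the NIC's error counters can be checked with ethtool (generic commands, nothing specific to this setup); steadily rising error counters would point at a cable or switch port problem:

Code:
# current link status, speed and duplex of the bridge port
ethtool eno1

# per-NIC statistics; watch for rising error/drop counters
ethtool -S eno1 | grep -iE 'err|drop|crc'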
 
Those are messages from the bridge about the STP state of the connected port.

EDIT: but the last lines for each interface show it in forwarding state.
 
They tell you whether you have a loop somewhere in your network, and if STP is working correctly it will block a link to resolve the loop. Since you stated that you couldn't ping the VMs, it might have something to do with this, but there could also be another cause.
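
If you want to check the STP state of the bridge ports yourself, something like this should work (the second command assumes bridge-utils is installed):

Code:
# list bridge ports and their current STP state
bridge link show

# detailed per-port STP information for vmbr0 (needs bridge-utils)
brctl showstp vmbr0

# whether STP is enabled on the bridge at all (0 = disabled)
cat /sys/class/net/vmbr0/bridge/stp_state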
 
After updating to the latest 6.4.13 version with kernel 5.4.143-1-pve, it's happening to me too:

[Wed Nov 24 01:02:10 2021] i40e 0000:1c:00.1 eno2: NIC Link is Down
[Wed Nov 24 01:02:10 2021] vmbr1: port 1(eno2) entered disabled state
[Wed Nov 24 01:02:16 2021] i40e 0000:1c:00.1 eno2: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None
[Wed Nov 24 01:02:16 2021] vmbr1: port 1(eno2) entered blocking state
[Wed Nov 24 01:02:16 2021] vmbr1: port 1(eno2) entered forwarding state
[Wed Nov 24 01:21:54 2021] i40e 0000:1c:00.1 eno2: NIC Link is Down
[Wed Nov 24 01:21:54 2021] vmbr1: port 1(eno2) entered disabled state
[Wed Nov 24 01:21:59 2021] i40e 0000:1c:00.1 eno2: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None
[Wed Nov 24 01:21:59 2021] vmbr1: port 1(eno2) entered blocking state
[Wed Nov 24 01:21:59 2021] vmbr1: port 1(eno2) entered forwarding state

# ethtool -i eno2
driver: i40e
version: 2.8.20-k
firmware-version: 3.33 0x80000e48 1.1876.0
expansion-rom-version:
bus-info: 0000:1c:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

# lspci -vvv|grep Ethernet
1c:00.0 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 09)
Subsystem: Super Micro Computer Inc Ethernet Connection X722 for 10GBASE-T
1c:00.1 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 09)
Subsystem: Super Micro Computer Inc Ethernet Connection X722 for 10GBASE-T

# dmesg |grep Ethernet
[ 4.288892] i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 2.8.20-k

With the previous 5.0.21-2-pve kernel it's not happening.

What can I do to fix this? Boot with the 5.0 kernel? Update the Intel i40e driver? Thank you.
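
For what it's worth, an easy way to watch for further flaps without digging through old logs is to follow the kernel log live (standard util-linux/systemd tools, nothing Proxmox-specific):

Code:
# follow kernel messages with human-readable timestamps, link changes only
dmesg -wT | grep -Ei 'link is (up|down)'

# or the journald equivalent
journalctl -k -f | grep -Ei 'link is (up|down)'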
 
Confirmed: the problem is i40e version 2.8.20-k with kernel 5.4.143-1-pve.

I rebooted Proxmox with kernel 5.0.21-5-pve, which uses i40e version 2.7.6-k, and the problem doesn't happen.
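
In case anyone wants to keep booting the old kernel until this is fixed: on a legacy-GRUB install you can set it as the default menu entry. The exact entry title depends on your grub.cfg, so check it first (systems booted via systemd-boot work differently):

Code:
# find the exact menu entry title for the old kernel
grep 'menuentry ' /boot/grub/grub.cfg | grep 5.0.21-5-pve

# then in /etc/default/grub set, for example:
# GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.0.21-5-pve"

# and regenerate the GRUB config
update-grub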
 
Hi guys,

I solved the problem by changing the hard disk.

I was using a SATA SSD; I switched to an NVMe disk and that solved it.

I think SATA is not fast enough for a 10 (or even 1) gigabit LAN.

Sorry for my bad English.
 
