Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

hi

Got similar logs here with an XL710; it produces a lot of log spam.

Code:
Mar 22 23:16:19 opx kernel: i40e: Intel(R) Ethernet Connection XL710 Network Driver
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC adding RX filters on PF, promiscuous mode forced on
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

I'm on

Code:
pve-manager/7.4-17/513c62be (running kernel: 5.15.131-2-pve)

The kernel module is:

Code:
srcversion:     53E40F02D13135369F99EBD
vermagic:       5.15.131-2-pve SMP mod_unload modversions

I checked the offload options with ethtool:

Code:
ethtool -k enp3s0f1 | grep rx-vlan

Code:
rx-vlan-offload: on
rx-vlan-filter: on [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
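
Since rx-vlan-filter is reported as [fixed] here, trying to turn it off directly with ethtool will most likely be refused; just as a quick check (same interface as above), something like:

Code:
ethtool -K enp3s0f1 rx-vlan-filter off
# while the driver marks the feature [fixed], I'd expect ethtool to answer
# with something like "Cannot change rx-vlan-filter"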

My interface VLAN config looks like:

Code:
auto vmbr1
iface vmbr1 inet static
        bridge-ports enp3s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#vrack_prive

auto vmbr1.11
iface vmbr1.11 inet static
        address 10.0.1.60/24
#interlan

...
# got 6 vlans
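
As far as I understand, with rx-vlan-filter on, the i40e tries to program a hardware filter for every MAC/VLAN combination, so a wide bridge-vids range fills the filter table quickly. One thing I may try is limiting bridge-vids to the VLANs actually in use; a sketch of what I mean (the VIDs below are just placeholders for my 6 VLANs):

Code:
auto vmbr1
iface vmbr1 inet static
        bridge-ports enp3s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        # only the VLANs actually in use instead of 2-4094,
        # so fewer VLAN filters get pushed down to the NIC
        bridge-vids 11 12 13 14 15 16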
 
Same here with an XL710 x4 which is passed through to a VM running MikroTik.
No VLANs.

Proxmox 8.1.5
linux 6.5.13-3-pve
 
I was having the same issue described here with two different Proxmox servers. I updated /etc/network/interfaces as below, and it solved the problem. Thanks for the guidance!

Code:
iface vmbr0 inet manual
        bridge-ports enp4s0f0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10 11 12 13 14 15 16 50 51 77 101 102
        offload-rx-vlan-filter off
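
In case it helps, after editing /etc/network/interfaces the change can be applied and checked without a reboot, roughly like this (interface name as in the snippet above; if the physical port still shows "on [fixed]" afterwards, the option may need to sit on the physical interface stanza instead of the bridge):

Code:
ifreload -a
ethtool -k enp4s0f0 | grep rx-vlan-filter
# expected after the change: rx-vlan-filter: off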
 
I just upgraded from 8.1 to 8.2 and had this exact same issue crop up where it didn't before. Adding offload-rx-vlan-filter off as described above removed a ton of errors similar to what others reported.
 
Any update on this?
 
Thank you for this post - this helped me solve the problem with X710-DA4 and Proxmox 8.3.
 
I've had similar problems and changed my interfaces file to match the one posted above, with the only difference being:
Code:
bridge-vids 2-4094
And today one of the servers decided to spew out plenty of:

Code:
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
All while becoming unreachable (thank god for remote PDUs).

Nice thing to happen on Christmas Eve, but plenty of redundancy in the cluster picked up the slack ;) Still, it would be nice to get this resolved.
 
And today, like on cue, a second server in the test cluster dropped off the network with exactly the same messages in syslog, exactly the same hardware config, and exactly the same interfaces file.
 
We reverted a cluster to kernel 6.5 and the issues were gone. Maybe that also solves your problem.
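
For reference, pinning the older kernel on a PVE host should be roughly this (the version string is only an example, take whatever the list command reports):

Code:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.5.13-3-pve
reboot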
 
Yeah, but you know ... this issue has been affecting people since 2020 - I doubt 6.5 was even in the numbering pipeline then. Not saying that 6.5 is somehow magical, but that would require a double tap - a fix in 6.5 and then a mess-up after 6.5. You know, I like coincidences, but double coincidences are that much harder to buy. Again, I'm not trying to pick on you ... I might be missing something very obvious, and since I'm dumb it eludes me.
 
So wait until this is added to a PVE update...
Yes, hopefully the patch can be added in the near future.

Cards with firmware version 7.2 or lower don't seem to have the issue with the current driver (srcversion: 15C57BC76BC78CF0FFE1D5A).
I was only able to reproduce the problem with cards that have firmware version 9.40.
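
For anyone who wants to compare, the firmware version of the card is reported by ethtool (interface name is just an example):

Code:
ethtool -i enp3s0f1 | grep -E 'driver|firmware-version'
# the firmware-version line shows whether the card is on the 7.x or 9.x
# firmware discussed above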
 
Do you know whether it already is, or when it will show up in the kernel?
I can confirm that driver version v2.27.8 resolves the issue. We now need someone with the expertise to integrate the new driver into the PVE kernel.
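
Until that lands in a PVE kernel, one stopgap is building Intel's out-of-tree i40e release yourself; roughly like this (tarball name and header package are assumptions, and the module has to be rebuilt after every kernel upgrade unless it is hooked into DKMS):

Code:
# headers and toolchain for the running PVE kernel
apt install build-essential pve-headers

# unpack the i40e 2.27.8 source release from Intel and build it
tar xf i40e-2.27.8.tar.gz
cd i40e-2.27.8/src
make
make install

# reload the driver (this briefly drops the links on the card)
rmmod i40e && modprobe i40e
modinfo i40e | grep -E '^(version|srcversion)'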
 
I think (or rather hope) that those things come through the "backports" channel. It may also depend on how severe the fix is. We just need to know which mainline kernel it landed in, then we can trace it into the current version that the PVE kernel is based on.
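
A quick way to see what the currently booted kernel actually ships, for comparison against a fixed version, is something like:

Code:
modinfo i40e | grep -E '^(version|srcversion|vermagic)'
# the in-tree module may not print a version: line at all; in that case
# the srcversion (as in the posts above) is what changes between builds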
 
This affected 3 of my Proxmox servers, and it will be up to 5 once I migrate the remaining VMware hosts.
I hope it gets resolved soon.

What can I do in the meantime other than wait on the devs?
 
Hey ;) Today, over half of my test cluster got affected by this bug (4 nodes out of 6) within 5 minutes of each other - the PDU watchdog was doing overtime, however one node, even though it had this error happening, was still responding to ping. Not great for a ping-based PDU watchdog. As they say, "fun times".
 