Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

Hi,

I'm getting similar logs here with an XL710, and it produces a lot of them:

Code:
Mar 22 23:16:19 opx kernel: i40e: Intel(R) Ethernet Connection XL710 Network Driver
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC adding RX filters on PF, promiscuous mode forced on
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Mar 22 23:16:23 opx kernel: i40e 0000:03:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

I'm on:

Code:
pve-manager/7.4-17/513c62be (running kernel: 5.15.131-2-pve)

The kernel module is:

Code:
srcversion:     53E40F02D13135369F99EBD
vermagic:       5.15.131-2-pve SMP mod_unload modversions
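
(For anyone comparing: the two outputs above should match what pveversion and modinfo report.)

Code:
pveversion
modinfo i40e | grep -E 'srcversion|vermagic'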

I checked the offload options with ethtool:

Code:
ethtool -k enp3s0f1 | grep rx-vlan

Code:
rx-vlan-offload: on
rx-vlan-filter: on [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
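
As far as I can tell, I40E_AQ_RC_ENOSPC means the card ran out of hardware slots while adding RX (MAC/VLAN) filters, so it may help to see how many VLANs and learned MACs the bridge port is actually carrying. A rough check with the iproute2 bridge tool (interface names are from my setup):

Code:
# VLANs configured on the physical bridge port
bridge vlan show dev enp3s0f1 | wc -l
# MAC addresses currently in the bridge forwarding table
bridge fdb show br vmbr1 | wc -l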

My interface VLAN config looks like this:

Code:
auto vmbr1
iface vmbr1 inet static
        bridge-ports enp3s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#vrack_prive

auto vmbr1.11
iface vmbr1.11 inet static
        address 10.0.1.60/24
#interlan

...
# got 6 vlans
 
Same here with an XL710 x4 which is passed through to a VM running MikroTik.
No VLANs.

Proxmox 8.1.5
linux 6.5.13-3-pve
 
I was having the same issue described here with two different Proxmox servers. I updated /etc/network/interfaces as below, and it solved the problem. Thanks for the guidance!

Code:
iface vmbr0 inet manual
        bridge-ports enp4s0f0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10 11 12 13 14 15 16 50 51 77 101 102
        offload-rx-vlan-filter off
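
In case it helps: with ifupdown2 (the Proxmox default) the change should apply without a reboot, and you can re-check the offload state afterwards. Something like this (interface name is from the config above, adjust to yours):

Code:
# reload /etc/network/interfaces without a reboot (ifupdown2)
ifreload -a
# confirm the filter state on the bridge port
ethtool -k enp4s0f0 | grep rx-vlan-filter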
 
I just upgraded from 8.1 to 8.2 and had this exact same issue crop up where it didn't before. Adding offload-rx-vlan-filter off removed a ton of errors like those others reported.
 
Any update on this?
 
Thank you for that post - it helped me solve the problem with an X710-DA4 and Proxmox 8.3.
 
I've had similar problems and changed the interfaces file to the config posted above, with the only difference being:
Code:
bridge-vids 2-4094
And today one of the servers decided to spew out plenty of:

Code:
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF
All while the server became unreachable (thank god for remote PDUs).

Nice thing to happen on Christmas Eve, but plenty of redundancy in the cluster picked up the slack ;) Still, it would be nice to get this resolved.
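
The only real difference from the config that worked for others is the bridge-vids 2-4094 range. As far as I understand, that keeps the full VLAN range active on the port and gives the i40e far more filter combinations to deal with than an explicit list, so narrowing it to the VLANs actually in use might be worth a test, e.g. (the VIDs below are just placeholders):

Code:
iface vmbr0 inet manual
        bridge-ports enp4s0f0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        # list only the VLANs actually in use instead of 2-4094
        bridge-vids 10 11 50
        offload-rx-vlan-filter off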
 
And today, right on cue, the second server in the test cluster dropped off the network with exactly the same messages in syslog, exactly the same hardware config, and exactly the same interfaces file.
 
We reverted a cluster to kernel 6.5 and the issues were gone. Maybe that also solves your problem.
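
If you want to try that and the 6.5 kernel package is still installed, it can usually be pinned with proxmox-boot-tool (the version below is the one mentioned earlier in the thread, adjust to whatever "kernel list" shows):

Code:
# list the installed kernels
proxmox-boot-tool kernel list
# pin the 6.5 kernel and reboot into it
proxmox-boot-tool kernel pin 6.5.13-3-pve
reboot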
 
