Broadcom 10G NIC does not work with VLANs

Kenneth_H

Hi
So I finally got myself a 10G SFP+ switch and wanted to connect my Proxmox host to it.
The host is an HP DL360p Gen8, equipped with a 2-port HP 530FLR-SFP+ FlexLOM NIC, which is basically an HP-branded Broadcom/QLogic controller.
Without VLAN awareness, the connection runs fine. But as soon as I make either of the two ports part of a VLAN-aware configuration (sketched below), I get the attached traceback:
traceback-bnx2x.png
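
This is roughly the /etc/network/interfaces configuration I mean; the interface name and addresses are placeholders, and the crash appears as soon as the VLAN-aware option is enabled:
Code:
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        # enabling this option is what triggers the traceback
        bridge_vlan_aware yes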

I am running PVE 5.3-5:
Code:
proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
pve-kernel-4.15: 5.2-12
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-33
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-5
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-31
pve-container: 2.0-31
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-16
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-43
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1

Also, here is information on the network cards from "lspci -v"
lspci.png
 
* Are all firmware updates installed on the server? Quite a few bugs disappear with newer firmware (see the sketch below for a quick check).
* Otherwise, does `dmesg` provide any hints as to where the problem might originate?
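
A minimal sketch for the firmware check, assuming the first port is named eno1 (`ethtool -i` prints the firmware version the driver actually sees):
Code:
# driver name, driver version and NIC firmware version for the port
ethtool -i eno1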
 
According to the HPE Support website and the firmware versions reported by iLO 4, I have the latest versions.
When I have some time later this week, I will try to reproduce the problem and check dmesg for anything useful.
 
So I have now tried to reproduce the problem and recorded the output of dmesg to a file using this command:
Code:
dmesg | grep 'bnx' > bnx2.txt

And this is the result:
Code:
[    2.154065] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.712.30-0 (2014/02/10)
[    2.154177] bnx2x 0000:03:00.0: msix capability found
[    2.154325] bnx2x 0000:03:00.0: part number 394D4342-31383735-31543030-47303030
[    2.266135] bnx2x 0000:03:00.1: msix capability found
[    2.266330] bnx2x 0000:03:00.1: part number 394D4342-31383735-31543030-47303030
[    2.375634] bnx2x 0000:03:00.1 eno2: renamed from eth1
[    2.396529] bnx2x 0000:03:00.0 eno1: renamed from eth0
[   11.819169] bnx2x 0000:03:00.0 eno1: using MSI-X  IRQs: sp 103  fp[0] 105 ... fp[7] 112
[   12.022537] bnx2x 0000:03:00.0 eno1: NIC Link is Up, 10000 Mbps full duplex, Flow control: ON - receive & transmit
[   12.773933] bnx2x: [bnx2x_attn_int_deasserted3:4323(eno1)]MC assert!
[   12.774001] bnx2x: [bnx2x_mc_assert:720(eno1)]XSTORM_ASSERT_LIST_INDEX 0x2
[   12.774073] bnx2x: [bnx2x_mc_assert:736(eno1)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x00000100 0x00020017 0x0001005f
[   12.774131] bnx2x: [bnx2x_mc_assert:750(eno1)]Chip Revision: everest3, FW Version: 7_13_1
[   12.774202] bnx2x: [bnx2x_attn_int_deasserted3:4329(eno1)]driver assert
[   12.774224] bnx2x: [bnx2x_panic_dump:923(eno1)]begin crash dump -----------------
[   12.774263] bnx2x: [bnx2x_panic_dump:933(eno1)]def_idx(0x118)  def_att_idx(0x4)  attn_state(0x1)  spq_prod_idx(0x31) next_stats_cnt(0x2)
[   12.774367] bnx2x: [bnx2x_panic_dump:938(eno1)]DSB: attn bits(0x0)  ack(0x1)  id(0x0)  idx(0x4)
[   12.774431] bnx2x: [bnx2x_panic_dump:939(eno1)]     def (0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x119 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0)  igu_sb_id(0x0)  igu_seg_id(0x1) pf_id(0x0)  vnic_id(0x0)  vf_id(0xff)  vf_valid (0x0) state(0x1)
[   12.774625] bnx2x: [bnx2x_panic_dump:990(eno1)]fp0: rx_bd_prod(0x1c6)  rx_bd_cons(0x1)  rx_comp_prod(0x1d0)  rx_comp_cons(0x4)  *rx_cons_sb(0x4)
[   12.774666] bnx2x: [bnx2x_panic_dump:993(eno1)]     rx_sge_prod(0x400)  last_max_sge(0x0)  fp_hc_idx(0x4)
[   12.774745] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp0: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.774838] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp0: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.774930] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp0: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.775021] bnx2x: [bnx2x_panic_dump:1021(eno1)]     run indexes (0x4 0x0)
[   12.775025] bnx2x: [bnx2x_panic_dump:1027(eno1)]     indexes (0x0 0x4 0x0 0x0 0x0 0x0 0x0 0x0)pf_id(0x0)  vf_id(0xff)  vf_valid(0x0) vnic_id(0x0)  same_igu_sb_1b(0x1) state(0x1)
[   12.775230] bnx2x: [bnx2x_panic_dump:990(eno1)]fp1: rx_bd_prod(0x1c5)  rx_bd_cons(0x0)  rx_comp_prod(0x1cf)  rx_comp_cons(0x3)  *rx_cons_sb(0x3)
[   12.775324] bnx2x: [bnx2x_panic_dump:993(eno1)]     rx_sge_prod(0x400)  last_max_sge(0x0)  fp_hc_idx(0x3)
[   12.775396] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp1: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.775488] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp1: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.775600] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp1: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.775676] bnx2x: [bnx2x_panic_dump:1021(eno1)]     run indexes (0x3 0x0)
[   12.775678] bnx2x: [bnx2x_panic_dump:1027(eno1)]     indexes (0x0 0x3 0x0 0x0 0x0 0x0 0x0 0x0)pf_id(0x0)  vf_id(0xff)  vf_valid(0x0) vnic_id(0x0)  same_igu_sb_1b(0x1) state(0x1)
[   12.775802] bnx2x: [bnx2x_panic_dump:990(eno1)]fp2: rx_bd_prod(0x1c5)  rx_bd_cons(0x0)  rx_comp_prod(0x1cf)  rx_comp_cons(0x3)  *rx_cons_sb(0x3)
[   12.775864] bnx2x: [bnx2x_panic_dump:993(eno1)]     rx_sge_prod(0x400)  last_max_sge(0x0)  fp_hc_idx(0x3)
[   12.775925] bnx2x: [bnx2x_set_vlan_one:8479(eno1)]Set VLAN failed
[   12.775929] bnx2x: [bnx2x_vlan_configure_vid_list:12991(eno1)]Unable to config VLAN 272
[   12.776046] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp2: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.776118] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp2: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.779986] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp2: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.783952] bnx2x: [bnx2x_panic_dump:1021(eno1)]     run indexes (0x3 0x0)
[   12.783955] bnx2x: [bnx2x_panic_dump:1027(eno1)]     indexes (0x0 0x3 0x0 0x0 0x0 0x0 0x0 0x0)pf_id(0x0)  vf_id(0xff)  vf_valid(0x0) vnic_id(0x0)  same_igu_sb_1b(0x1) state(0x1)
[   12.790651] bnx2x: [bnx2x_panic_dump:990(eno1)]fp3: rx_bd_prod(0x1c5)  rx_bd_cons(0x0)  rx_comp_prod(0x1cf)  rx_comp_cons(0x3)  *rx_cons_sb(0x3)
[   12.795778] bnx2x: [bnx2x_panic_dump:993(eno1)]     rx_sge_prod(0x400)  last_max_sge(0x0)  fp_hc_idx(0x3)
[   12.798511] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp3: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.804231] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp3: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.809926] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp3: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.815478] bnx2x: [bnx2x_panic_dump:1021(eno1)]     run indexes (0x3 0x0)
[   12.815482] bnx2x: [bnx2x_panic_dump:1027(eno1)]     indexes (0x0 0x3 0x0 0x0 0x0 0x0 0x0 0x0)pf_id(0x0)  vf_id(0xff)  vf_valid(0x0) vnic_id(0x0)  same_igu_sb_1b(0x1) state(0x1)
[   12.824088] bnx2x: [bnx2x_panic_dump:990(eno1)]fp4: rx_bd_prod(0x1c5)  rx_bd_cons(0x0)  rx_comp_prod(0x1cf)  rx_comp_cons(0x3)  *rx_cons_sb(0x3)
[   12.829838] bnx2x: [bnx2x_panic_dump:993(eno1)]     rx_sge_prod(0x400)  last_max_sge(0x0)  fp_hc_idx(0x3)
[   12.832601] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp4: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.838295] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp4: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.844096] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp4: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.849508] bnx2x: [bnx2x_panic_dump:1021(eno1)]     run indexes (0x3 0x0)
[   12.849513] bnx2x: [bnx2x_panic_dump:1027(eno1)]     indexes (0x0 0x3 0x0 0x0 0x0 0x0 0x0 0x0)pf_id(0x0)  vf_id(0xff)  vf_valid(0x0) vnic_id(0x0)  same_igu_sb_1b(0x1) state(0x1)
[   12.858085] bnx2x: [bnx2x_panic_dump:990(eno1)]fp5: rx_bd_prod(0x1c5)  rx_bd_cons(0x0)  rx_comp_prod(0x1cf)  rx_comp_cons(0x3)  *rx_cons_sb(0x3)
[   12.863509] bnx2x: [bnx2x_panic_dump:993(eno1)]     rx_sge_prod(0x400)  last_max_sge(0x0)  fp_hc_idx(0x3)
[   12.866361] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp5: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.871692] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp5: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.877436] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp5: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.883212] bnx2x: [bnx2x_panic_dump:1021(eno1)]     run indexes (0x3 0x0)
[   12.883216] bnx2x: [bnx2x_panic_dump:1027(eno1)]     indexes (0x0 0x3 0x0 0x0 0x0 0x0 0x0 0x0)pf_id(0x0)  vf_id(0xff)  vf_valid(0x0) vnic_id(0x0)  same_igu_sb_1b(0x1) state(0x1)
[   12.891726] bnx2x: [bnx2x_panic_dump:990(eno1)]fp6: rx_bd_prod(0x1c5)  rx_bd_cons(0x0)  rx_comp_prod(0x1cf)  rx_comp_cons(0x3)  *rx_cons_sb(0x3)
[   12.896838] bnx2x: [bnx2x_panic_dump:993(eno1)]     rx_sge_prod(0x400)  last_max_sge(0x0)  fp_hc_idx(0x3)
[   12.899758] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp6: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.905708] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp6: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.911530] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp6: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.917385] bnx2x: [bnx2x_panic_dump:1021(eno1)]     run indexes (0x3 0x0)
[   12.917389] bnx2x: [bnx2x_panic_dump:1027(eno1)]     indexes (0x0 0x3 0x0 0x0 0x0 0x0 0x0 0x0)pf_id(0x0)  vf_id(0xff)  vf_valid(0x0) vnic_id(0x0)  same_igu_sb_1b(0x1) state(0x1)
[   12.926217] bnx2x: [bnx2x_panic_dump:990(eno1)]fp7: rx_bd_prod(0x1c5)  rx_bd_cons(0x0)  rx_comp_prod(0x1cf)  rx_comp_cons(0x3)  *rx_cons_sb(0x3)
[   12.931628] bnx2x: [bnx2x_panic_dump:993(eno1)]     rx_sge_prod(0x400)  last_max_sge(0x0)  fp_hc_idx(0x3)
[   12.934423] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp7: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.940183] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp7: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.946015] bnx2x: [bnx2x_panic_dump:1010(eno1)]fp7: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
[   12.951901] bnx2x: [bnx2x_panic_dump:1021(eno1)]     run indexes (0x3 0x0)
[   12.951905] bnx2x: [bnx2x_panic_dump:1027(eno1)]     indexes (0x0 0x3 0x0 0x0 0x0 0x0 0x0 0x0)pf_id(0x0)  vf_id(0xff)  vf_valid(0x0) vnic_id(0x0)  same_igu_sb_1b(0x1) state(0x1)
[   12.960777] bnx2x 0000:03:00.0 eno1: bc 7.13.75
[   12.967987] bnx2x: [bnx2x_mc_assert:720(eno1)]XSTORM_ASSERT_LIST_INDEX 0x2
[   12.970384] bnx2x: [bnx2x_mc_assert:736(eno1)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x00000100 0x00020017 0x0001005f
[   12.972282] bnx2x: [bnx2x_mc_assert:750(eno1)]Chip Revision: everest3, FW Version: 7_13_1
[   12.974456] bnx2x: [bnx2x_panic_dump:1186(eno1)]end crash dump -----------------
[   18.652124] NETDEV WATCHDOG: eno1 (bnx2x): transmit queue 5 timed out
[   18.652234]  spl(O) btrfs xor zstd_compress raid6_pq usbmouse hid_generic usbkbd usbhid hid psmouse ahci libahci bnx2x hpsa ptp scsi_transport_sas pps_core mdio libcrc32c
[   20.672063] bnx2x: [bnx2x_clean_tx_queue:1205(eno1)]timeout waiting for queue[4]: txdata->tx_pkt_prod(1) != txdata->tx_pkt_cons(0)
[   22.696871] bnx2x: [bnx2x_clean_tx_queue:1205(eno1)]timeout waiting for queue[5]: txdata->tx_pkt_prod(1) != txdata->tx_pkt_cons(0)
[   24.726713] bnx2x: [bnx2x_clean_tx_queue:1205(eno1)]timeout waiting for queue[6]: txdata->tx_pkt_prod(1) != txdata->tx_pkt_cons(0)
[   26.763994] bnx2x: [bnx2x_clean_tx_queue:1205(eno1)]timeout waiting for queue[4]: txdata->tx_pkt_prod(1) != txdata->tx_pkt_cons(0)
[   28.802119] bnx2x: [bnx2x_clean_tx_queue:1205(eno1)]timeout waiting for queue[5]: txdata->tx_pkt_prod(1) != txdata->tx_pkt_cons(0)
[   30.844264] bnx2x: [bnx2x_clean_tx_queue:1205(eno1)]timeout waiting for queue[6]: txdata->tx_pkt_prod(1) != txdata->tx_pkt_cons(0)
[   30.848139] bnx2x: [bnx2x_del_all_macs:8501(eno1)]Failed to delete MACs: -5
[   30.848208] bnx2x: [bnx2x_chip_cleanup:9321(eno1)]Failed to schedule DEL commands for UC MACs list: -5
[   30.872178] bnx2x: [bnx2x_func_stop:9080(eno1)]FUNC_STOP ramrod failed. Running a dry transaction
[   31.614923] bnx2x 0000:03:00.0 eno1: using MSI-X  IRQs: sp 103  fp[0] 105 ... fp[7] 112
[   31.728698] bnx2x: [bnx2x_nic_load:2754(eno1)]Function start failed!

Honestly, this does not make a lot of sense to me. The lines that stand out are the firmware assertion (MC assert!) and the Set VLAN failed / Unable to config VLAN 272 messages just before the crash dump, but maybe others can make more sense of it.
 
I have the same problem with the QLogic card (BCM57810),
and it is still present on the latest Proxmox VE release (Linux 5.0.15-1-pve #1 SMP PVE 5.0.15-1).
 
Same here with Proxmox VE 6.
The HP Ethernet 10Gb 2-port 530FLR-SFP+ adapter (part number 647581-B21) has the same problem when VLAN awareness is activated.
The HP FlexFabric 10Gb 2-port 534FLR-SFP+ adapter (part number 700751-B21) booted fine with the VLAN-aware setting activated.

Both servers are HP DL380p Gen8.
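
To see exactly which controller and kernel driver each adapter uses, something like this works (output will differ per card):
Code:
# list NICs with PCI vendor:device IDs and the kernel driver in use
lspci -nnk | grep -i -A 3 ethernet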
 
Same problem here on 6.1.
Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) fails (bnx2x module).
QLogic Corp. cLOM8214 1/10GbE Controller (rev 54).

Both are supposed to be HP 530SFP+.
 
Did anyone ever solve this? I have an HP 530SFP+ on a fresh install of Proxmox VE 6.3-2, and both interfaces are listed as "unknown", even though the installer detected them just fine.
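
For reference, this is a quick way to list the per-interface state that the GUI reflects (interface names will differ on your host):
Code:
# one-line state summary (UP/DOWN/UNKNOWN) per interface
ip -br link show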
 
Same issue here on the latest Proxmox VE.

BCM57810

Code:
root@pve:~# pveversion
pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve)

Bash:
root@pve:~# ethtool eno1
Settings for eno1:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: Unknown!
        Duplex: Unknown! (255)
        Port: FIBRE
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: g
        Wake-on: g
        Current message level: 0x00000000 (0)

        Link detected: no
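
Not a confirmed fix, but one experiment worth trying is disabling the hardware VLAN offloads so tagging happens in software rather than in the bnx2x firmware (the interface name is an example, and the setting does not persist across reboots on its own):
Code:
# turn off hardware VLAN acceleration on the port
ethtool -K eno1 rxvlan off txvlan off
# confirm the resulting offload state
ethtool -k eno1 | grep -i vlan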
 
