2 x 10Gb 2-port cards don't work in VMs, one is fibre.

hzk916

Hi
Trying to get both of these NICs to work on a Lenovo SR850 V2 on Proxmox 8.2.2, with all updates loaded. The server BIOS has been updated, and the firmware for both of these cards is at the latest version, 227.1.115.0.
Adapter: Broadcom NX-E PCIe 10Gb 2-Port Base-T Ethernet Adapter (PCI Slot 1)
Adapter: Broadcom 57414 10/25GbE SFP28 2-port PCIe Ethernet Adapter (PCI Slot 2)
The 1Gb 4-port card works fine in any VM (Adapter: Broadcom 5719 1GbE RJ45 4-port OCP Ethernet Adapter (PCI Slot 4)).
I have been following several threads, no luck so far. Is there an apt-get update for the Broadcom 57414?
I'll post lspci -v next
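
A quick way to check which kernel driver and firmware the cards are actually running (a sketch; the interface name is only an example taken from later in this thread, adjust to your own, and note that the BCM57414 normally uses the in-kernel bnxt_en driver, so there is no separate apt package for it):
Code:
# List the Broadcom NICs and the kernel driver bound to each
lspci -nnk | grep -i -A3 broadcom
# Driver and firmware version as reported by the kernel
ethtool -i ens2f1np1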
 
"BCM57414 NetXtreme-E 10Gb/25Gb" works here on 10GB (tested with DAG and multimode 10GB)
Proxmox 8.2.2 fresh install without any additional drivers on Kernel 6.8.4-3-pve
ThinkSystem SR650 V2

Code:
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: ens2f0np0 (primary_reselect always)
Currently Active Slave: ens2f0np0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: ens2f0np0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 04:32:01:xx:xx:xx
Slave queue ID: 0
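
For reference, a minimal /etc/network/interfaces stanza that would produce an active-backup bond like the one above might look like this (only a sketch; the slave names are illustrative, taken from this thread):
Code:
auto bond0
iface bond0 inet manual
bond-slaves ens2f0np0 ens2f1np1
bond-miimon 100
bond-mode active-backup
bond-primary ens2f0np0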

What is the output of "ip a"?
 
"BCM57414 NetXtreme-E 10Gb/25Gb" works here on 10GB (tested with DAG and multimode 10GB)
Proxmox 8.2.2 fresh install without any additional drivers on Kernel 6.8.4-3-pve
ThinkSystem SR650 V2

Code:
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: ens2f0np0 (primary_reselect always)
Currently Active Slave: ens2f0np0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: ens2f0np0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 04:32:01:xx:xx:xx
Slave queue ID: 0

What is the output of "ip a"?
Code:
root@qbgw:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens4f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether d4:04:e6:0f:24:b8 brd ff:ff:ff:ff:ff:ff
altname enp174s0f0
3: ens1f0np0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 04:32:01:d7:c0:90 brd ff:ff:ff:ff:ff:ff
altname enp216s0f0np0
4: ens4f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:04:e6:0f:24:b9 brd ff:ff:ff:ff:ff:ff
altname enp174s0f1
5: ens4f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:04:e6:0f:24:ba brd ff:ff:ff:ff:ff:ff
altname enp174s0f2
6: ens4f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:04:e6:0f:24:bb brd ff:ff:ff:ff:ff:ff
altname enp174s0f3
7: ens1f1np1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 8600 qdisc mq master vmbr2 state DOWN group default qlen 1000
link/ether 04:32:01:d7:c0:91 brd ff:ff:ff:ff:ff:ff
altname enp216s0f1np1
8: ens2f0np0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 04:32:01:de:fa:f0 brd ff:ff:ff:ff:ff:ff
altname enp217s0f0np0
9: ens2f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc mq master vmbr1 state UP group default qlen 1000
link/ether 04:32:01:de:fa:f1 brd ff:ff:ff:ff:ff:ff
altname enp217s0f1np1
10: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d4:04:e6:0f:24:b8 brd ff:ff:ff:ff:ff:ff
inet 10.0.75.99/16 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::d604:e6ff:fe0f:24b8/64 scope link
valid_lft forever preferred_lft forever
11: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8600 qdisc noqueue state UP group default qlen 1000
link/ether 04:32:01:d7:c0:91 brd ff:ff:ff:ff:ff:ff
inet 10.0.75.97/16 scope global vmbr2
valid_lft forever preferred_lft forever
inet6 fe80::632:1ff:fed7:c091/64 scope link
valid_lft forever preferred_lft forever
12: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 8600 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
link/ether 9a:97:60:83:d2:26 brd ff:ff:ff:ff:ff:ff
13: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8600 qdisc noqueue state UP group default qlen 1000
link/ether 3e:a3:86:1d:8e:d9 brd ff:ff:ff:ff:ff:ff
14: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8600 qdisc noqueue master vmbr2 state UP group default qlen 1000
link/ether 36:d8:13:e2:54:20 brd ff:ff:ff:ff:ff:ff
15: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8600 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
link/ether 3e:a3:86:1d:8e:d9 brd ff:ff:ff:ff:ff:ff
16: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 04:32:01:de:fa:f1 brd ff:ff:ff:ff:ff:ff
inet 10.0.75.98/16 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 fe80::632:1ff:fede:faf1/64 scope link
valid_lft forever preferred_lft forever
17: tap101i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i1 state UNKNOWN group default qlen 1000
link/ether b2:38:43:15:6c:86 brd ff:ff:ff:ff:ff:ff
18: fwbr101i1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 46:65:11:e3:56:c6 brd ff:ff:ff:ff:ff:ff
19: fwpr101p1@fwln101i1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
link/ether f2:f5:dc:31:4a:8b brd ff:ff:ff:ff:ff:ff
20: fwln101i1@fwpr101p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i1 state UP group default qlen 1000
link/ether 46:65:11:e3:56:c6 brd ff:ff:ff:ff:ff:ff
21: tap101i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i2 state UNKNOWN group default qlen 1000
link/ether 16:d6:15:db:0e:c6 brd ff:ff:ff:ff:ff:ff
22: fwbr101i2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:3c:d7:74:81:83 brd ff:ff:ff:ff:ff:ff
23: fwpr101p2@fwln101i2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether d2:57:e2:e0:07:80 brd ff:ff:ff:ff:ff:ff
24: fwln101i2@fwpr101p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i2 state UP group default qlen 1000
link/ether 02:3c:d7:74:81:83 brd ff:ff:ff:ff:ff:ff
 
Code:
8: ens2f0np0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 04:32:01:de:fa:f0 brd ff:ff:ff:ff:ff:ff
altname enp217s0f0np0

Looks like the device is there. Could it be a physical problem? Do you have link on the card? Did you configure/bind the device ens2f0np0? Is the card connected to a switch, or directly to another host? With SFP+, Cat6, or DAC?
What are the outputs of:
Code:
ethtool ens2f0np0
Code:
cat /etc/network/interfaces
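
A quick per-port overview of link state can also help narrow things down (a sketch; plain iproute2/ethtool, nothing Proxmox-specific):
Code:
ip -br link show
ethtool ens2f0np0 | grep -E 'Speed|Duplex|Link detected'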
 
ens2f0np0 is not connected... but ens2f1np1 is connected to the Fibre.

Here is the output of ethtool ens2f0np0


Code:
root@qbgw:~# ethtool ens2f0np0
Settings for ens2f0np0:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full
10000baseT/Full
1000baseKX/Full
10000baseKX4/Full
10000baseKR/Full
25000baseCR/Full
25000baseKR/Full
25000baseSR/Full
1000baseX/Full
10000baseCR/Full
10000baseSR/Full
10000baseLR/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: RS BASER
Advertised link modes: 1000baseT/Full
10000baseT/Full
1000baseKX/Full
10000baseKX4/Full
10000baseKR/Full
25000baseCR/Full
25000baseKR/Full
25000baseSR/Full
1000baseX/Full
10000baseCR/Full
10000baseSR/Full
10000baseLR/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Auto-negotiation: on
Port: FIBRE
PHYAD: 1
Transceiver: internal
Supports Wake-on: d
Wake-on: d
Current message level: 0x00002081 (8321)
drv tx_err hw
Link detected: no
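
Since ens2f0np0 is the unplugged port, the same check on the connected fibre port may be more telling (a sketch, reusing the interface name from the config below):
Code:
ethtool ens2f1np1 | grep -E 'Speed|Duplex|Auto-negotiation|Link detected'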
 
Output of cat /etc/network/interfaces

Code:
root@qbgw:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto ens4f0
iface ens4f0 inet manual
#1Gb Cat5

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

iface ens1f0np0 inet manual

auto ens1f1np1
iface ens1f1np1 inet manual
mtu 8600
#10gb CAT6

iface ens2f0np0 inet manual

auto ens2f1np1
iface ens2f1np1 inet manual
mtu 8900
#10gb SFP Fibre

auto vmbr0
iface vmbr0 inet static
address 10.0.75.99/16
bridge-ports ens4f0
bridge-stp off
bridge-fd 0
#1Gb CAT5

auto vmbr2
iface vmbr2 inet static
address 10.0.75.97/16
bridge-ports ens1f1np1
bridge-stp off
bridge-fd 0
mtu 8600
#10Gb Cat6

auto vmbr1
iface vmbr1 inet static
address 10.0.75.98/16
bridge-ports ens2f1np1
bridge-stp off
bridge-fd 0
#10GB Fibre

source /etc/network/interfaces.d/
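
One detail worth noting when comparing this with the ip a output above: ens2f1np1 is set to mtu 8900, but vmbr1 has no mtu line and therefore comes up at 1500. If jumbo frames are intended on the fibre bridge, the stanza would need an explicit mtu, as vmbr2 already has (a sketch, assuming 8900 is the intended value):
Code:
auto vmbr1
iface vmbr1 inet static
address 10.0.75.98/16
bridge-ports ens2f1np1
bridge-stp off
bridge-fd 0
mtu 8900
#10Gb Fibre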
 
Getting partial pings back on the fibre, but none on the 10Gb Cat6.

[Attachment: screenshot of the ping results]
OK, let's check the fibre first. Do you have SFP+ modules from Lenovo?
Are they multimode or singlemode? It is IMPORTANT that you have the correct fibre cable: multimode and singlemode cables are different. LR/LC could also be important at 10Gb/s.
 
Yes, the card and modules were ordered from Lenovo. All are multimode, going to a Zyxel switch with a multimode SFP. A blue light at both ends would suggest a physical connection. Not sure about the LR/LC. The fibre patch lead is around 100m long, back to the switch in another room.

Here I am pinging Proxmox from the Win11 VM and getting the same intermittent ping; that should rule out the physical SFP fibre connection, which might suggest it is more of a driver or maybe an install problem.
[Attachment: screenshot of the ping from the Win11 VM to Proxmox]
 
You didn't answer the question of whether you are using multimode cables. You can physically connect multimode modules with singlemode cables, but the result would be... what you have. I just encountered the same situation with the same Lenovo server, ZyXEL XGS switch and the same network card, when I didn't check the SFP+ modules Lenovo delivered and was using a singlemode cable. I ordered a multimode cable and everything worked like a charm.

Could you do the ping test from an external device to your Proxmox VM host, too?
The timeouts for "internal" communication from host to guest could be because you are losing the link on your network device.
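
Something along these lines from the external host would show whether full-size frames make it across the fibre (a sketch; the address is vmbr1 from the config posted earlier, and 1472 bytes of payload plus 28 bytes of ICMP/IP headers gives a 1500-byte packet):
Code:
# 100 pings with a full-size 1500-byte packet, fragmentation prohibited
ping -M do -s 1472 -c 100 10.0.75.98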
 
These are the cables I ordered. I only order multimode for this site because the main 300m cable joining the two buildings is multimode.
That cable is very old and I did not expect it to run at 10Gb, but it does.
[Attachment: photo of the ordered multimode cables]
This is a ping test from a Win10 VM on another Proxmox host (the i7 PC) back to a Win11 VM on the SR850:
[Attachment: screenshot of the ping test]
 
Apparently the old cable does not manage 10GbE after all.
With OM3 cables you can run 10Gbit up to 300m.
With OM4, up to 400m.
Old 62.5/125 (OM1) cables are not certified for it, but should manage up to about 30m.
OM2 can run 10Gbit up to 82m with SR modules; with LRM modules up to 220m, but that requires special cables.

If you put 10Gbit onto a 300m OM2 line, you can get a link, but the signal is extremely weak and many packets arrive corrupted.
This could be the cause of the dropouts.
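
If the link is marginal, the receive error counters and the SFP+ module's optical diagnostics (if the module supports DOM) should show it (a sketch, using the fibre port name from earlier in the thread):
Code:
# RX errors/drops on the fibre port
ip -s link show ens2f1np1
# Driver statistics, filtered for error counters
ethtool -S ens2f1np1 | grep -iE 'err|drop|crc'
# Optical power levels reported by the SFP+ module
ethtool -m ens2f1np1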
 
Just to explain the setup: the Lenovo SR850 is replacing the current Lenovo X6 (the live server); we are still only testing the SR850. The network is in place and both servers are connected to it. The Lenovo X6 does not have any 10Gb NICs (only 1Gb Cat5), but it is connected across the same old 300m fibre and all speeds are fine. The SR850 has a 4 x 1Gb NIC and it also connects fine across the old 300m fibre. Both the X6 and the SR850 are in the same cabinet, but the i7 PC/server (Proxmox) is on the other side of the old 300m fibre cable.
 
