[EDIT: Mellanox NIC not working] VMs not connecting after change to vmbr0

KatrinaD

New Member
Aug 25, 2025

My initial setup used bond0 for vmbr0. It worked but was slow. Today I installed a new 10 GbE NIC (ens3d1) and set vmbr0 to use ens3d1 instead of bond0. That works for accessing PVE and the cluster, but the VMs would not connect. I created a new vmbr1 using bond0, and when I change a VM's virtual NIC to use vmbr1, the VM connects. When I set it back to vmbr0, it has no network connection. I've rebooted the PVE host and the VMs several times with no change. What am I doing wrong?
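For reference, here's roughly what the relevant part of my /etc/network/interfaces looks like after the change (a sketch from memory; the address and bond details below are examples, not my exact values):

auto ens3d1
iface ens3d1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports ens3d1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0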


EDIT October 13 2026: after getting a compatible SFP+ module for the 10GbE NIC (it still shows no carrier/down; I rebooted the host and swapped patch cables to be sure, no change) and spending a bunch of time trying to download drivers that no longer exist for download, I'm working on accepting that getting this old NIC, made for a server that's 10 years old, to work isn't going to happen.
 
Yes, eno1-4 are on an HP Ethernet 1Gb 4-port 331i Adapter which has firmware 17.4.41.

ens3 (empty slot) and ens3d1 are on an HP Ethernet 10G 2-port 546SFP+ Adapter which has firmware Mellanox ConnectX3Pro 2.40.5030. (I'm using old hardware in the process of migrating from ESXi to PVE.)

PVE version is 8.4.0. (Trying to get my cluster all set before I update everything and start upgrading to PVE9.)
 
I've gathered more info. I had assumed that being able to ping the IP assigned to vmbr0 meant the link was working, but I get the same ping and web interface response with the fiber cable disconnected, so that assumption wasn't valid. I'm going back to "what transceivers are compatible with this NIC" now.
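For anyone following along, a quicker way to check the physical link directly instead of relying on ping (interface name per my setup; carrier can't be read while the interface is administratively down):

ip link set ens3d1 up
ip -br link show ens3d1                # shows NO-CARRIER if there's no link
cat /sys/class/net/ens3d1/carrier      # 1 = link up, 0 = no link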
 
Good find — that explains it. If you still get a response with the fiber unplugged, the link was never actually active. Check that your SFP+ modules or DAC cables are compatible with the HP 546SFP+ (Mellanox ConnectX-3 Pro), and run ethtool ens3d1 to confirm the link status. Once the link comes up, vmbr0 and your VMs should connect normally.
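For example (interface name taken from your post; ethtool -m only works if the driver can read the module EEPROM):

ethtool ens3d1 | grep -i 'link detected'   # should report "yes" once the link is up
ethtool -m ens3d1                          # dump the SFP+ module EEPROM: vendor, part number, cable type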
 
Thanks for your help, @readyspace

After getting a compatible SFP+ module (HPE 455883-B21) for the 10GbE NIC, it still shows no carrier/down. I rebooted the host and swapped patch cables to be sure. No change. After spending a bunch of time trying to download drivers that no longer exist for download, I'm working on accepting that getting this old NIC, made for a server that's 10 years old, to work isn't going to happen.

Settings for ens3d1:
	Supported ports: [ FIBRE ]
	Supported link modes:   10000baseKX4/Full
	                        1000baseX/Full
	                        10000baseCR/Full
	                        10000baseSR/Full
	Supported pause frame use: Symmetric Receive-only
	Supports auto-negotiation: No
	Supported FEC modes: Not reported
	Advertised link modes:  10000baseKX4/Full
	                        1000baseX/Full
	                        10000baseCR/Full
	                        10000baseSR/Full
	Advertised pause frame use: Symmetric
	Advertised auto-negotiation: No
	Advertised FEC modes: Not reported
	Speed: Unknown!
	Duplex: Unknown! (255)
	Auto-negotiation: off
	Port: FIBRE
	PHYAD: 0
	Transceiver: internal
	Supports Wake-on: d
	Wake-on: d
	Current message level: 0x00000014 (20)
	                       link ifdown
	Link detected: no
 
Okay, I haven't given up quite as much as I said above. I've made progress installing the Mellanox tools.

root@pve1:~# dmesg | grep mlx
[ 1.863475] mlx4_core: Mellanox ConnectX core driver v4.0-0
[ 1.863513] mlx4_core: Initializing 0000:08:00.0
[ 8.091333] mlx4_core 0000:08:00.0: DMFS high rate steer mode is: disabled performance optimized steering
[ 8.091599] mlx4_core 0000:08:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
[ 8.320204] mlx4_en: Mellanox ConnectX HCA Ethernet driver v4.0-0
[ 8.320468] mlx4_en 0000:08:00.0: Activating port:1
[ 8.325133] mlx4_en: 0000:08:00.0: Port 1: Using 24 TX rings
[ 8.325138] mlx4_en: 0000:08:00.0: Port 1: Using 16 RX rings
[ 8.325338] mlx4_en: 0000:08:00.0: Port 1: Initializing port
[ 8.325749] mlx4_en 0000:08:00.0: registered PHC clock
[ 8.326166] mlx4_en 0000:08:00.0: Activating port:2
[ 8.329087] mlx4_en: 0000:08:00.0: Port 2: Using 24 TX rings
[ 8.329090] mlx4_en: 0000:08:00.0: Port 2: Using 16 RX rings
[ 8.329266] mlx4_en: 0000:08:00.0: Port 2: Initializing port
[ 8.341806] <mlx4_ib> mlx4_ib_probe: mlx4_ib: Mellanox ConnectX InfiniBand driver v4.0-0
[ 8.342477] <mlx4_ib> mlx4_ib_probe: counter index 2 for port 1 allocated 1
[ 8.342480] <mlx4_ib> mlx4_ib_probe: counter index 3 for port 2 allocated 1
[ 8.361472] mlx4_core 0000:08:00.0 ens3: renamed from eth0
[ 8.370715] mlx4_en: ens3: Link Up
[ 8.377511] mlx4_core 0000:08:00.0 ens3d1: renamed from eth1
[ 14.963670] mlx4_core 0000:08:00.0 ens3d1: entered allmulticast mode
[ 14.963751] mlx4_core 0000:08:00.0 ens3d1: entered promiscuous mode
[ 15.007907] mlx4_en: ens3d1: Steering Mode 1
[ 15.034243] mlx4_en: ens3d1: Link Down
[61496.754183] mlx4_core 0000:08:00.0 ens3d1: left allmulticast mode
[61496.754190] mlx4_core 0000:08:00.0 ens3d1: left promiscuous mode
[61496.895388] mlx4_core 0000:08:00.0 ens3d1: entered allmulticast mode
[61496.895441] mlx4_core 0000:08:00.0 ens3d1: entered promiscuous mode

But ethtool still shows the link as down. Lights on the switch and the transceiver module blink, but there's no link.
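Next things I plan to try with the Mellanox tools installed (a sketch only; the PCI address 08:00.0 comes from the dmesg above, and depending on which package you have the commands may be mstconfig/mstflint instead of mlxconfig/mstflint):

ethtool -m ens3d1            # confirm the HPE 455883-B21 module is actually read by the driver
mlxconfig -d 08:00.0 query   # check card config, e.g. LINK_TYPE_P2 should be ETH, not IB/auto
mstflint -d 08:00.0 query    # confirm the firmware version running on the ConnectX-3 Pro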