[EDIT: Mellanox NIC not working] VMs not connecting after change to vmbr0

KatrinaD
Aug 25, 2025

[Attachment: PVE_VMBR.png]

My initial setup used bond0 as the bridge port for vmbr0. It worked but was slow. Today I installed a new 10 GbE NIC (ens3d1) and switched vmbr0 to use ens3d1 instead of bond0. That works for accessing PVE and the cluster, but the VMs lost their network connection. I then created a new vmbr1 using bond0. When I change a VM's virtual NIC to use vmbr1, the VM connects; when I set it back to vmbr0, it has no network connection. I've rebooted the PVE host and the VMs several times with no change. What am I doing wrong?
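
For reference, the relevant part of my /etc/network/interfaces looks roughly like this (the address, gateway, bond mode, and slave NIC names below are placeholders rather than my exact values, so treat it as a sketch of the layout):

# bond0: the old 1 GbE bond, now only used by vmbr1
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad

# vmbr0: management bridge, switched from bond0 to the new 10 GbE port
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports ens3d1
        bridge-stp off
        bridge-fd 0

# vmbr1: new bridge on the old bond, the only one the VMs can currently use
auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0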


EDIT October 13 2026: after getting a compatible SFP+ module for the 10GbE NIC (it still shows no carrier/down; I rebooted the host and swapped patch cables to be sure, with no change) and spending a bunch of time trying to download drivers that are no longer available, I'm working on accepting that getting this old NIC, made for a server that's 10 years old, to work isn't going to happen.
 
Yes, eno1-4 are on an HP Ethernet 1Gb 4-port 331i Adapter which has firmware 17.4.41.

ens3 (empty slot) and ens3d1 are on an HP Ethernet 10G 2-port 546SFP+ Adapter which has firmware Mellanox ConnectX3Pro 2.40.5030. (I'm using old hardware in the process of migrating from ESXi to PVE.)

PVE version is 8.4.0. (Trying to get my cluster all set before I update everything and start upgrading to PVE9.)
 
I've gathered more info. I had assumed that being able to ping the IP assigned to vmbr0 meant the connection was working, but I get the same ping and web interface response with the fiber cable disconnected, so that assumption wasn't valid. I'm going back to "what transceivers are compatible with the NIC" now.
 
Good find — that explains it. If you still get a response with the fiber unplugged, the link was never actually active. Check that your SFP+ modules or DAC cables are compatible with the HP 546SFP+ (Mellanox ConnectX-3 Pro), and run ethtool ens3d1 to confirm the link status. Once the link comes up, vmbr0 and your VMs should connect normally.
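
Something like the following will show whether the card sees the module at all (interface name taken from your post, adjust if yours differs):

ethtool ens3d1          # link state and negotiated speed
ethtool -m ens3d1       # dump the SFP+ module EEPROM, if the driver can read it
dmesg | grep -i mlx4    # messages from the ConnectX-3 driver (mlx4_core / mlx4_en)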
 
Thanks for your help, @readyspace

Even after getting a compatible SFP+ module (HPE 455883-B21) for the 10GbE NIC, it still shows no carrier/down. I rebooted the host and swapped patch cables to be sure, with no change. After spending a bunch of time trying to download drivers that are no longer available, I'm working on accepting that getting this old NIC, made for a server that's 10 years old, to work isn't going to happen.

Settings for ens3d1:
        Supported ports: [ FIBRE ]
        Supported link modes:   10000baseKX4/Full
                                1000baseX/Full
                                10000baseCR/Full
                                10000baseSR/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  10000baseKX4/Full
                                1000baseX/Full
                                10000baseCR/Full
                                10000baseSR/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: Unknown!
        Duplex: Unknown! (255)
        Auto-negotiation: off
        Port: FIBRE
        PHYAD: 0
        Transceiver: internal
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000014 (20)
                               link ifdown
        Link detected: no
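
For anyone who hits the same wall: a couple of read-only checks that show what the card and driver report about the transceiver (same interface name as above; I'm listing these as suggestions, not a confirmed fix):

ethtool -m ens3d1                    # vendor and part number of the SFP+ module, if the driver can read it
lspci -nnk | grep -iA3 mellanox      # which kernel driver (should be mlx4_core) is bound to the card
journalctl -k | grep -i mlx4         # kernel log lines from the Mellanox driver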