VMs not connecting after change to vmbr0

KatrinaD

New Member
Aug 25, 2025
[Attachment: PVE_VMBR.png]

My initial setup used bond0 for vmbr0. It worked but was slow. Today I installed a new 10 GbE NIC (ens3d1) and pointed vmbr0 at ens3d1 instead of bond0. That works for reaching the PVE web UI and the cluster, but the VMs no longer connect. I then created a new vmbr1 using bond0: when I switch a VM's virtual NIC to vmbr1 it connects, but when I set it back to vmbr0 it has no network at all. I've rebooted the PVE host and the VMs several times with no change. What am I doing wrong?
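
For reference, this is roughly how the bridges are defined in my /etc/network/interfaces after the change. The addresses, bond mode, and slave list below are simplified placeholders rather than my exact values, but the bridge-ports lines match what I described:

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad

# New 10 GbE uplink for the host and (intended) for the VMs
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports ens3d1
        bridge-stp off
        bridge-fd 0

# Bridge on the old 1 GbE bond; VMs attached here do work
auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0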
 
Yes, eno1-4 are on an HP Ethernet 1Gb 4-port 331i Adapter which has firmware 17.4.41.

ens3 (empty slot) and ens3d1 are on an HP Ethernet 10G 2-port 546SFP+ Adapter which has firmware Mellanox ConnectX3Pro 2.40.5030. (I'm using old hardware in the process of migrating from ESXi to PVE.)

PVE version is 8.4.0. (Trying to get my cluster all set before I update everything and start upgrading to PVE9.)
 
I've gathered more info. I had assumed that being able to ping the IP assigned to vmbr0 meant the link was working, but I get the same ping and web-interface response with the fiber cable disconnected, so that assumption wasn't valid. I'm going back to figuring out which transceivers are compatible with the NIC.
 
Good find — that explains it. If you still get a response with the fiber unplugged, the link was never actually active. Check that your SFP+ modules or DAC cables are compatible with the HP 546SFP+ (Mellanox ConnectX-3 Pro), and run ethtool ens3d1 to confirm the link status. Once the link comes up, vmbr0 and your VMs should connect normally.
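
For example, a few checks that usually narrow this down (assuming the card uses the standard mlx4 driver; adjust the interface name if yours differs):

Code:
# Link state and negotiated speed; "Link detected: no" means no carrier
ethtool ens3d1
ip -br link show ens3d1        # LOWER_DOWN also indicates no link

# Vendor and part number of the installed SFP+/DAC, if its EEPROM is readable
ethtool -m ens3d1

# Driver and firmware in use, plus any kernel messages about the port or cable
ethtool -i ens3d1
dmesg | grep -i mlx4

If ethtool -m can read the module but the link never comes up, a vendor-coding mismatch between the module/DAC and the HP-branded card is a common culprit.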