Mellanox ConnectX-3 Issues

crainbramp

Hello:

Running the latest Proxmox, with a dual-port ConnectX-3 card in the server running in Ethernet mode. I cannot seem to bring up any interfaces on it. The bridge comes up, but no VLAN interfaces/IPs get assigned to it; it seems to bomb out at that point.

Firmware is the latest: 2.42.5000

Below is some output along with part of the syslog. I would appreciate it if anyone could give an assist:

Code:
root@node03sea:~# pveversion
pve-manager/6.0-4/2a719255 (running kernel: 5.0.15-1-pve)

Code:
03:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]
        Subsystem: Mellanox Technologies MT27500 Family [ConnectX-3]
        Kernel driver in use: mlx4_core
        Kernel modules: mlx4_core
root@node03sea:~# lspci -vv -s 03:00.0 | grep "Part number" -A 3
                        [PN] Part number: MCX312A-XCBT
                        [EC] Engineering changes: A5
                        [SN] Serial number: MT1338X00222
                        [V0] Vendor specific: PCIe Gen3 x8

Code:
root@node03sea:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 2c:59:e5:41:f9:c0 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 2c:59:e5:41:f9:c1 brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 2c:59:e5:41:f9:c2 brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 2c:59:e5:41:f9:c3 brd ff:ff:ff:ff:ff:ff
6: ens2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP mode DEFAULT group default qlen 1000
    link/ether f4:52:14:0b:c6:50 brd ff:ff:ff:ff:ff:ff
7: ens2d1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether f4:52:14:0b:c6:51 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2c:59:e5:41:f9:c3 brd ff:ff:ff:ff:ff:ff
9: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f4:52:14:0b:c6:50 brd ff:ff:ff:ff:ff:ff

Code:
auto vmbr1
iface vmbr1 inet manual
        bridge_ports ens2
        bridge_stp off
        bridge_fd 0
        bridge_vlan-aware yes

auto vmbr1.10
iface vmbr1.10 inet static
        address  10.0.44.243
        netmask  24

auto vmbr1.20
iface vmbr1.20 inet static
        address  10.0.45.243
        netmask  24

Code:
Dec 17 13:35:08 node03sea systemd[1]: Started Proxmox VE Login Banner.
Dec 17 13:35:08 node03sea ifup[1058]: Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
Dec 17 13:35:08 node03sea kernel: [   13.183604] mlx4_en: ens2d1: Link Up
Dec 17 13:35:08 node03sea kernel: [   13.238430] mlx4_en: ens2: Link Up
Dec 17 13:35:08 node03sea systemd-udevd[658]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 17 13:35:08 node03sea systemd-udevd[658]: Could not generate persistent MAC address for vmbr1: No such file or directory
Dec 17 13:35:08 node03sea kernel: [   13.313321] vmbr1: port 1(ens2) entered blocking state
Dec 17 13:35:08 node03sea kernel: [   13.313324] vmbr1: port 1(ens2) entered disabled state
Dec 17 13:35:08 node03sea kernel: [   13.313415] device ens2 entered promiscuous mode
Dec 17 13:35:08 node03sea kernel: [   13.329335] mlx4_en: ens2: Steering Mode 1
Dec 17 13:35:08 node03sea ifup[1058]: Waiting for vmbr1 to get ready (MAXWAIT is 2 seconds).
Dec 17 13:35:08 node03sea kernel: [   13.477898] device ens2 left promiscuous mode
Dec 17 13:35:08 node03sea ifup[1058]: RTNETLINK answers: No space left on device
Dec 17 13:35:08 node03sea ifup[1058]: run-parts: /etc/network/if-up.d/bridgevlan exited with return code 255
Dec 17 13:35:08 node03sea ifup[1058]: ifup: failed to bring up vmbr1
Dec 17 13:35:08 node03sea ifup[1058]: Waiting for vmbr1 to get ready (MAXWAIT is 2 seconds).
Dec 17 13:35:08 node03sea ifup[1058]: RTNETLINK answers: No space left on device
Dec 17 13:35:08 node03sea ifup[1058]: run-parts: /etc/network/if-up.d/bridgevlan exited with return code 255
Dec 17 13:35:08 node03sea ifup[1058]: ifup: failed to bring up vmbr1
Dec 17 13:35:08 node03sea ifup[1058]: ifup: could not bring up parent interface vmbr1
Dec 17 13:35:08 node03sea ifup[1058]: Waiting for vmbr1 to get ready (MAXWAIT is 2 seconds).
Dec 17 13:35:08 node03sea ifup[1058]: RTNETLINK answers: No space left on device
Dec 17 13:35:08 node03sea ifup[1058]: run-parts: /etc/network/if-up.d/bridgevlan exited with return code 255
Dec 17 13:35:08 node03sea ifup[1058]: ifup: failed to bring up vmbr1
Dec 17 13:35:08 node03sea ifup[1058]: ifup: could not bring up parent interface vmbr1
Dec 17 13:35:08 node03sea systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 13:35:08 node03sea systemd[1]: networking.service: Failed with result 'exit-code'.
Dec 17 13:35:08 node03sea systemd[1]: Failed to start Raise network interfaces.
 
The card's hardware VLAN filter only supports ~128 VLANs; that's the "No space left on device" error in your syslog. Don't use the VLAN-aware setting and you'll get further.
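For anyone wanting to confirm this on their own host: a VLAN-aware bridge defaults to bridge-vids 2-4094, so the kernel tries to program thousands of VLAN filter entries into the mlx4 port, far beyond the roughly 128 the hardware accepts. A quick (illustrative, run on the affected node; substitute your own port name for ens2) way to see how many VLANs got programmed on the bridge port:

Code:
```
# List the VLANs registered on the bridge port, then count them;
# on a VLAN-aware vmbr this list is what overflows the ConnectX-3
# hardware VLAN filter (~128 entries).
bridge vlan show dev ens2
bridge vlan show dev ens2 | wc -l
```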
 
A little late, but thanks. Removing the offending section worked.

Also, a note for others: I had already found the information about the 128-VLAN limit and changed my /etc/network/interfaces to include "bridge_vids 2-128", but that did not work, so I assumed something else was wrong. It appears you have to remove the VLAN-aware descriptor entirely, not just restrict the VLAN range.
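For reference, here is a sketch of what the working stanza looks like after the fix, i.e. the vmbr1 config from above with the VLAN-aware line dropped (untested; addresses and the ens2 port name are from this thread, adjust to your setup). The vmbr1.10/vmbr1.20 subinterfaces then do the 802.1Q tagging in software on top of a plain bridge, so nothing has to be programmed into the card's VLAN filter:

Code:
```
auto vmbr1
iface vmbr1 inet manual
        bridge_ports ens2
        bridge_stp off
        bridge_fd 0

auto vmbr1.10
iface vmbr1.10 inet static
        address  10.0.44.243
        netmask  24

auto vmbr1.20
iface vmbr1.20 inet static
        address  10.0.45.243
        netmask  24
```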
 
