Proxmox on VRTX

UnityHealth

New Member
Jun 7, 2024
I am trying to get Proxmox up and running on a VRTX cluster. The issue I am running into is that the mezzanine network adapters are not showing up for Proxmox. I can see the add-in cards in the chassis, just not the other network adapters. They DO show up in the blade BIOS. Has anyone had any experience using Proxmox on a VRTX chassis?
 
No, but I've always wondered how those single-chassis HA solutions work under the hood. Have you tried other Linux distros, like Debian, to see if the network adapters show up?
 
I haven't tried any other Linux, but it does work with Windows.
You could try booting the system with the Ubuntu 24.04 installer (don't install anything, just boot it) to check whether they show up and work. If they don't, then it's not Proxmox-specific, and it could be a generic Linux kernel or driver issue, or a system configuration issue.
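If you do boot a live session, a rough sketch of what to check (all standard Linux tools, nothing Proxmox-specific):

```shell
# Do the mezzanine ports show up as PCI devices, and is a driver bound?
lspci -nnk | grep -A 3 -i ethernet || echo "no ethernet PCI devices listed"

# Did the kernel actually create interfaces for them?
ip -br link || true

# Any probe messages or errors from the Broadcom 10GbE driver?
dmesg | grep -i bnx2 || echo "no bnx2x messages (driver never probed?)"
```

If lspci lists them but no interface appears, it's a driver/probe problem rather than the hardware being invisible to the OS.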
 
Good point.

As a data point for the Proxmox installation, you can switch to a terminal prompt for typing commands with (I think) Control-Alt-F3. Control-Alt-F4 after that will switch back to the main installer screen.

There are a few Control-Alt-F<something> combinations for different things in the installer. Pretty sure F3 and F4 are the ones you'd want for this. :)
 
That's weird. Although that's output from lspci, it doesn't appear to have the right bits in it.

Are you super sure you ran lspci with the -nnk option?

It looks a lot more like the output of running lspci -m. :confused:
 
Yep, the output of that looks better. :)

Looking through it, there are two separate pairs of 10GbE adapters (four in total):

Code:
01:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet [14e4:168e] (rev 10)
        DeviceName: NIC1
        Subsystem: Dell NetXtreme II BCM57810 10 Gigabit Ethernet [1028:1f5f]
        Kernel driver in use: bnx2x
        Kernel modules: bnx2x
01:00.1 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet [14e4:168e] (rev 10)
        DeviceName: NIC2
        Subsystem: Dell NetXtreme II BCM57810 10 Gigabit Ethernet [1028:1f5f]
        Kernel driver in use: bnx2x
        Kernel modules: bnx2x

and:

Code:
10:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet [14e4:168e] (rev 10)
        Subsystem: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet [14e4:1006]
        Kernel modules: bnx2x
10:00.1 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet [14e4:168e] (rev 10)
        Subsystem: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet [14e4:1006]
        Kernel modules: bnx2x

Is that all of the ones you're expecting, or should there be more?
 
There are supposed to be 4. I am only getting 2 of those available to use for some reason, the lower two. The Dell adapters aren't enumerated.
 
Interesting. Would you be ok to paste the output from dmesg | grep -i bnx2x here?

If that doesn't show anything, then try journalctl -b 0 | grep -i bnx2x instead. :)
 
Hmmm, while you're looking at those bits, it wouldn't hurt to also grep for the text fragment 'NIC'. It's interesting that the Dell cards report their device names as "NIC1" and "NIC2". Don't think I've seen that before. :)
 
[ 6.000741] bnx2x 0000:01:00.0: msix capability found
[ 6.000908] bnx2x 0000:01:00.0: part number 0-0-0-0
[ 6.137273] bnx2x 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 6.137306] bnx2x 0000:01:00.1: msix capability found
[ 6.137480] bnx2x 0000:01:00.1: part number 0-0-0-0
[ 6.272458] bnx2x 0000:01:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 6.272489] bnx2x 0000:10:00.0: msix capability found
[ 6.272669] bnx2x 0000:10:00.0: Chip read returns all Fs. Preventing probe from continuing
[ 6.272717] bnx2x: probe of 0000:10:00.0 failed with error -22
[ 6.272729] bnx2x 0000:10:00.1: msix capability found
[ 6.272887] bnx2x 0000:10:00.1: Chip read returns all Fs. Preventing probe from continuing
[ 6.272941] bnx2x: probe of 0000:10:00.1 failed with error -22
[ 8.153250] bnx2x 0000:01:00.0 eno1: renamed from eth0
[ 8.185220] bnx2x 0000:01:00.1 eno2: renamed from eth1
[ 12.990567] bnx2x 0000:01:00.0 eno1: entered allmulticast mode
[ 13.573734] bnx2x 0000:01:00.0 eno1: using MSI-X IRQs: sp 54 fp[0] 56 ... fp[9] 65
[ 13.655501] bnx2x 0000:01:00.0 eno1: NIC Link is Up, 1000 Mbps full duplex, Flow control: none
[ 194.934161] bnx2x 0000:01:00.0 eno1: entered promiscuous mode
[ 260.035763] bnx2x 0000:01:00.0 eno1: left promiscuous mode
[ 261.189439] bnx2x 0000:01:00.0 eno1: entered promiscuous mode
[ 286.552820] bnx2x 0000:01:00.0 eno1: left promiscuous mode
[ 287.701650] bnx2x 0000:01:00.0 eno1: entered promiscuous mode
 
Interesting. Are you sure it's the two Dell cards that aren't showing up, not the other two?

Asking because the PCIe addresses in these error lines (10:00.0, 10:00.1) match the non-Dell cards:

[ 6.272669] bnx2x 0000:10:00.0: Chip read returns all Fs. Preventing probe from continuing
[ 6.272717] bnx2x: probe of 0000:10:00.0 failed with error -22
[ 6.272887] bnx2x 0000:10:00.1: Chip read returns all Fs. Preventing probe from continuing
[ 6.272941] bnx2x: probe of 0000:10:00.1 failed with error -22

Just after that, there's these two lines, and their PCIe addresses (01:00.0, 01:00.1) do seem to match the Dell cards:

[ 8.153250] bnx2x 0000:01:00.0 eno1: renamed from eth0
[ 8.185220] bnx2x 0000:01:00.1 eno2: renamed from eth1

That's what it looks like anyway. Reality, though, sometimes doesn't care what things "look like", hence my asking if you're sure. :)
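One way to settle it: sysfs records which PCIe device backs each interface, so something like this (a sketch; the `en*` name pattern may need adjusting) maps names to addresses:

```shell
# Print "interface -> PCIe address" for each en* interface.
# /sys/class/net/<if>/device is a symlink into the PCI device tree.
for nic in /sys/class/net/en*; do
    [ -e "$nic" ] || continue            # no matches -> skip
    addr=$(basename "$(readlink -f "$nic/device")")
    echo "$(basename "$nic") -> $addr"
done
```

If eno1 resolves to 0000:01:00.0, that's the Dell-subsystem card from your lspci output.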
 
This is what I see when I run ip link show

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 54:9f:35:77:1c:a1 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 54:9f:35:77:1c:a4 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 54:9f:35:77:1c:a1 brd ff:ff:ff:ff:ff:ff

They don't show up as options in the web console either. I wanted to use one of the Dell adapters for management, but I can't find them to assign them, if that makes sense.
 
I'm pretty sure (but not 100%) that the eno1 and eno2 are the two Dell adapters.

Those names in the ip link show output match up exactly with the names reported in the kernel output you pasted, which in turn match the PCIe addresses of the Dell adapters. ;)
 
Ahhh.

Don't give up just yet, you're 99.5% of the way there.

Edit this line in the file /etc/default/grub:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"

Change it to this:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=realloc=off"

Then run update-grub, and reboot.

If the above fix works, those two extra adapters will show up.
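For anyone scripting it, the sed pattern is the fiddly bit; a sketch, shown against a scratch copy so nothing real gets touched (pci=realloc=off just tells the kernel not to reallocate the PCI bridge resources the BIOS set up):

```shell
# Scratch copy standing in for /etc/default/grub, to test the sed pattern:
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > /tmp/grub.test

# Append pci=realloc=off to the default kernel command line:
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="quiet\)"/\1 pci=realloc=off"/' /tmp/grub.test

cat /tmp/grub.test   # -> GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=realloc=off"

# Against the real file it'd be the same sed on /etc/default/grub,
# followed by update-grub and a reboot.
```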
 
