I hope someone can give me some clarity about PCIe lanes.
The situation: a Mellanox ConnectX-3 10Gb network card.
I have a PCIe 4.0 x8 slot running at 4 lanes (CPU lanes).
The dual-port Mellanox card uses a PCIe 3.0 x8 connection.
As far as I know, 10 Gbps lets you transfer at a rate of 1.25 GB/s...
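For a sanity check, here is the back-of-the-envelope math, assuming the 3.0 x8 card negotiates PCIe 3.0 x4 in that slot (a link runs at the lower generation and lane count of the two sides):

PCIe 3.0 x4: 4 lanes x 8 GT/s x 128/130 encoding / 8 ≈ 3.94 GB/s per direction
Dual 10GbE:  2 x 10 Gbps / 8 = 2.5 GB/s

So even at x4, the link should have headroom for both 10GbE ports before protocol overhead.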
I have been trying to debug a weird issue with a secondary NIC + bridge. I use a Mellanox ConnectX-3 10Gbit interface card for the secondary NIC.
I am trying to set up two vmbrX bridges pointing to two different NICs, each on a different subnet; the VLANs are handled at the switch to make debugging...
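For context, a minimal /etc/network/interfaces sketch of that layout; the interface names and addresses below are made up, so substitute your own:

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.20.2/24
        bridge-ports enp6s0f0
        bridge-stp off
        bridge-fd 0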
Just bought this card to do some speed tests between nodes and VMs and validate them for production use.
Dell 19RNV Mellanox CX322A
But in the network tab, I can't see it :/
I know the card is recognized by my server:
root@proxmox2:~# lspci |grep -i mellanox
06:00.0 Ethernet controller...
I have 2 servers, each with one Mellanox ConnectX-4 100 GbE QSFP card installed. I connected the two with a Mellanox 100GbE DAC cable, and both iDRACs show me a link speed of 100Gbit/s. However, when I run iperf3 on both ends I only get around 34Gbit/s. I have read that iperf3 might be limited...
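For what it's worth, iperf3 has historically run all of a client's streams in a single thread, so one process often cannot saturate a 100GbE link on its own. Two things worth trying; the addresses and ports below are examples only:

iperf3 -c 10.0.0.2 -P 8                       # 8 parallel streams (still one thread)
# or several independent processes on different ports:
iperf3 -s -p 5201 &   iperf3 -s -p 5202 &     # server side
iperf3 -c 10.0.0.2 -p 5201 &   iperf3 -c 10.0.0.2 -p 5202 &   # client side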
Hello to all network and Proxmox experts, here's what happened:
I treated my Proxmox home server to a power-saving upgrade.
After assembling it, I reconnected the Mellanox ConnectX-2 single-port card (server) to my main system, Debian 11 Bullseye, via an SFP+...
Really grateful for some assistance; pulling my hair out here...
Setup: Dell R730 with a quad-port Broadcom NIC (OEM) and a Mellanox ConnectX-3, Proxmox 7.2.
VM: Sophos XG with 3 vmbrs (2 WAN, 1 LAN (trunk)).
Issue: The trunk port works fine when I use one of my Dell R730's integrated quad-port...
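For comparison while debugging, a VLAN-aware trunk bridge on the Mellanox port would look roughly like this in /etc/network/interfaces (the port name enp65s0 is only an example):

auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp65s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094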
OK, so I have these very old, and I mean OLD, Mellanox cards. They don't use SFP or SFP+; they use a CX4 connector. Not the Mellanox ConnectX-4 cards... but the port...
Anyway... it was a cheap way for me to get 10Gb in my house on my gaming rig (Win10), Plex server (Win10), storage server (Windows...
Proxmox cluster with Mellanox ConnectX-4 Lx network cards:
Everything worked fine under kernel 5.11. After upgrading to the newest kernel 5.13, massive networking problems: impossible to dump SFP info with ethtool, getting bit errors, and massive problems with the local Ceph instance installed, and so...
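One possible stopgap until this gets fixed is booting back into the known-good 5.11 kernel; newer versions of proxmox-boot-tool can pin it (the version string below is only an example, check the list first):

proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.11.22-7-pve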
I'm setting up a new cluster using Mellanox ConnectX-5 and ConnectX-4 Lx cards. The ConnectX-5s have not given me any issues (yet?), but the 4 Lx cards do not work that flawlessly.
After solving a very slow boot issue related to an old firmware version, it now seems that I won't be able to use...
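For anyone comparing notes: checking the running firmware against what Mellanox currently ships is usually the first step, e.g. with the firmware manager from the MFT package (assuming mst is set up):

mst start
mlxfwmanager --query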
Hello guys, I would like to ask one thing: we need SR-IOV enabled on our Mellanox card, but PVE 6.3 (Debian 10.6) is not listed among the supported OSes here https://www.mellanox.com/support/mlnx-ofed-matrix and PVE comes only in version 6.3 AFAICS http://download.proxmox.com/iso/ ... any idea...
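For reference, the firmware side of SR-IOV can be enabled with mlxconfig from MFT regardless of the OFED matrix question; a rough sketch, where the device path, interface name and VF count are examples only:

mst start
mlxconfig -d /dev/mst/mt4117_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=8
# reboot, then create the VFs via sysfs:
echo 8 > /sys/class/net/enp1s0f0/device/sriov_numvfs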
A few details about the system:
It's a hyperconverged cluster of 5 Supermicro AS-1114S-WN10RT servers.
4 of the servers have:
CPU: AMD EPYC 7702P, 64 cores / 128 threads (1 socket)
RAM: 512 GB
1 of the servers has:
CPU: AMD EPYC 7502P, 32 cores / 64 threads (1 socket)
RAM: 256 GB
I'm fighting with the network setup; I'm not sure this config can work, so it would be nice if you could share info or your config for a similar setup.
System: HPE ML350 Gen9
Network card: HPE 546SFP+ (MLX312B) - 2-port SFP+ (part number: 779793-B21)
Bridging: using OVS...
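In case a concrete example helps, the OVS flavour of a Proxmox bridge in /etc/network/interfaces looks roughly like this (names are placeholders patterned on the Proxmox OVS examples):

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0

allow-vmbr0 bond0
iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds enp4s0f0 enp4s0f1
        ovs_options bond_mode=balance-tcp lacp=active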
Hi, I'm trying to get a Mellanox ConnectX-3 (MCX311A-XCAT, CX311A) card to work with Proxmox. When using lspci, the card doesn't show up at all. I've searched everywhere but can't find where to go from here.
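A couple of checks that at least narrow down whether the PCIe link ever comes up (neither assumes any driver is loaded):

lspci -nn | grep -i mellanox
dmesg | grep -iE 'mlx|pcie'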
I'm new to Proxmox, having been running VMware since maybe 3.5 or something. I decided to switch over because I wanted to take the cheaper path to InfiniBand and then found the support in VMware not quite there.
My setup now is a Proxmox server with a dual-port Mellanox ConnectX-3 card. My...
We are currently in the process of replacing our VMware ESXi NFS NetApp setup with a Proxmox Ceph configuration.
We purchased 8 nodes with the following configuration:
- ThomasKrenn 1HE AMD Single-CPU RA1112
- AMD EPYC 7742 (2.25 GHz, 64-core, 256 MB cache)
- 512 GB RAM
- 2x 240GB SATA...
Hi, I was planning on switching from just using Ubuntu Server 18.04 to Proxmox 6.1 and virtualizing my existing Ubuntu server. I've run into an issue, however. When trying to install Proxmox I get the error "No network adapters found". The board I am using is an old Z97 board with a Core i5...
I'm testing the performance between two nodes connected by two Mellanox cards:
MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
On the latest Proxmox 6 I have installed all the packages:
apt-get install rdma-core libibverbs1 librdmacm1 libibmad5 libibumad3 ibverbs-providers...
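Once those are installed, a quick way to confirm the RDMA stack actually sees the card is ibv_devinfo; it ships in ibverbs-utils, which may need installing separately:

apt-get install ibverbs-utils
ibv_devinfo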
I'm having issues getting iSER over iWARP to work properly with Proxmox (or any other OS), and Intel has been very unhelpful in getting me to a working state. I do need the added bandwidth and lower latency offered by RDMA, so I'm looking into alternatives.
I'm making this post to get any input from...
I am testing the latest Proxmox in our environment and discovered that I need to switch the Mellanox ConnectX-3 into Ethernet mode for it to work. When trying to install the MFT tools (from the "Getting started with Mellanox Firmware tools (MFT) for Linux" tutorial on the Mellanox website), I get a...
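For the record, once MFT is working the mode switch itself is a single mlxconfig call; the /dev/mst device name below is only an example (it varies per card), and the value 2 means Ethernet:

mst start
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2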