Hello to all network and Proxmox experts, here is what happened:
I treated my Proxmox home server to a power-saving upgrade.
After putting it back together, I reconnected the Mellanox ConnectX-2 single-port card (server) to my main system, Debian 11 Bullseye, via an SFP+...
Really grateful for some assistance, pulling my hair out here...
Setup: Dell R730 with a quad-port Broadcom NIC (OEM) and a Mellanox ConnectX-3, Proxmox 7.2
VM: Sophos XG with 3 vmbr bridges (2 WAN, 1 LAN (trunk)).
Issue: the trunk port works fine when I use one of the Dell R730's integrated quad-port...
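For comparison, a trunk into a firewall VM over a plain Linux bridge is usually done with a VLAN-aware vmbr. A minimal `/etc/network/interfaces` sketch (the NIC name `enp4s0` and bridge name `vmbr2` are assumptions, not from the post):

```
# Hypothetical trunk bridge for the LAN port of a firewall VM
auto enp4s0
iface enp4s0 inet manual

auto vmbr2
iface vmbr2 inet manual
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With `bridge-vlan-aware yes`, tagged frames for the listed VIDs pass through to the VM unchanged, which is what a trunk port needs.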
OK, so I have these very old, and I mean OLD, Mellanox cards. They don't use SFP or SFP+; they use a CX4 connector. Not the Mellanox CX-4 cards, but the port.
Anyway, it was a cheap way for me to get 10Gb in my house on my gaming rig (Win10), Plex server (Win10), storage server (Windows...
Proxmox cluster with Mellanox ConnectX-4 Lx network cards:
Everything worked fine under kernel 5.11. After upgrading to the newest kernel, 5.13, massive networking problems: impossible to dump SFP info with ethtool, bit errors, massive problems with the local Ceph instance installed, and so...
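For reference, SFP/DDM data is normally read via the module EEPROM option of ethtool; a sketch for comparing the two kernels (the port name `enp65s0f0` is an assumption):

```shell
# Dump the SFP module EEPROM (vendor, diagnostics) -- this is what
# reportedly fails under 5.13 but worked under 5.11
ethtool -m enp65s0f0

# Driver and firmware versions, worth recording under both kernels
ethtool -i enp65s0f0
```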
I'm setting up a new cluster using Mellanox ConnectX-5 and ConnectX-4 Lx cards. The ConnectX-5s have not given me any issues (yet?), but the 4 Lx cards do not work that flawlessly.
After solving a very slow boot issue related to an old firmware version, it now seems that I won't be able to use...
Hello guys, I would like to ask one thing: we need SR-IOV enabled on our Mellanox card, but PVE 6.3 (Debian 10.6) is not listed among the supported OSes here https://www.mellanox.com/support/mlnx-ofed-matrix and PVE only comes in version 6.3 AFAICS http://download.proxmox.com/iso/ ... any idea...
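One path that does not depend on OFED supporting PVE's Debian base: SR-IOV is a firmware setting that MFT's `mlxconfig` can flip, and the inbox kernel driver can then create the VFs. A sketch (the device path and VF count are assumptions):

```shell
# Start the Mellanox firmware tools and find the device path
mst start
mst status

# Enable SR-IOV in firmware and expose 8 virtual functions
# (device path /dev/mst/mt4117_pciconf0 is an assumption)
mlxconfig -d /dev/mst/mt4117_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=8

# After a cold reboot, create the VFs with the inbox driver
# (interface name is an assumption)
echo 8 > /sys/class/net/enp65s0f0/device/sriov_numvfs
```

The VFs then show up in `lspci` and can be passed through to VMs from the PVE GUI; no OFED install on the host is required for this.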
A few pieces of information about the system:
It's a hyperconverged cluster of 5 Supermicro AS-1114S-WN10RT servers:
4 of the servers have:
CPU: 128 x AMD EPYC 7702P 64-Core Processor (1 socket)
RAM: 512 GB
1 of the servers has:
CPU: 64 x AMD EPYC 7502P 32-Core Processor (1 socket)
RAM: 256 GB
I'm fighting with the network setup and I'm not sure this config can work, so it would be nice if you could share info or your config for a similar setup.
System: HPE ML350 Gen9
Network card: HPE 546SFP+ (MLX312B) - 2-port SFP+ (part number: 779793-B21)
Bridging: using OVS...
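For an OVS bridge on Proxmox, the usual shape (with `openvswitch-switch` installed) uses PVE's ifupdown integration; a minimal `/etc/network/interfaces` sketch, with interface and bridge names as assumptions:

```
# Hypothetical OVS bridge over one SFP+ port of the 546SFP+
auto ens1f0
iface ens1f0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports ens1f0
```

`ovs-vsctl show` should then list the bridge and its port after an `ifreload -a` or reboot.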
Hi, I'm trying to get a Mellanox ConnectX-3 (MCX311A-XCAT, CX311A) card to work with Proxmox. The card doesn't show up at all in lspci. I've searched everywhere but can't find where to go from here.
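When a card is absent from lspci, the problem is below the driver level (slot, power, firmware, or PCIe link training), so a few generic checks narrow it down before touching Mellanox tooling:

```shell
# Does the PCIe device enumerate at all? Mellanox vendor ID is 15b3
lspci -nn | grep -i '15b3\|mellanox'

# Any PCIe link-training or power complaints at boot?
dmesg | grep -iE 'mlx|pcie'

# Force a PCI bus rescan without rebooting
echo 1 > /sys/bus/pci/rescan
```

If the card stays invisible even after a rescan and in a different slot, it is usually a hardware or firmware issue rather than anything Proxmox-specific.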
I'm new to Proxmox, having been running VMware since maybe 3.5 or something. I decided to switch over because I wanted to take the cheaper path to InfiniBand and then found the support in VMware not quite there.
My setup now is a Proxmox server with a dual-port Mellanox ConnectX-3 card. My...
We are currently in the process of replacing our VMware ESXi NFS NetApp setup with a Proxmox Ceph configuration.
We purchased 8 nodes with the following configuration:
- ThomasKrenn 1HE AMD Single-CPU RA1112
- AMD EPYC 7742 (2.25 GHz, 64-Core, 256 MB)
- 512 GB RAM
- 2x 240GB SATA...
Hi, I was planning on switching from plain Ubuntu Server 18.04 to Proxmox 6.1 and virtualizing my existing Ubuntu server. I've run into an issue, however: when trying to install Proxmox I get the error "No network adapters found". The board I am using is an old Z97 board with a Core i5...
I'm testing the performance across two nodes connected by two Mellanox cards:
MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
On the latest Proxmox 6 I have installed all the packages:
apt-get install rdma-core libibverbs1 librdmacm1 libibmad5 libibumad3 ibverbs-providers...
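After installing those packages, it is worth confirming the verbs stack actually sees the cards before running benchmarks; a sketch (the peer IP is an assumption, and `ib_send_bw` comes from the `perftest` package, which is not in the list above):

```shell
# List RDMA-capable devices known to the verbs stack
ibv_devices
ibv_devinfo

# Quick RDMA bandwidth test between the two nodes (perftest package)
ib_send_bw                 # run on the first node (server side)
ib_send_bw 10.0.0.1        # run on the second node, pointing at the first
```

If `ibv_devices` comes back empty, the provider module for the card is missing and no benchmark will use RDMA.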
I'm having issues getting iSER over iWARP to work properly with Proxmox (or any other OS), and Intel has been very unhelpful about getting me to a working state. I do need the added bandwidth/latency offered by RDMA, so I'm looking into alternatives.
I'm making this post to get any input from...
I am testing the latest Proxmox in our environment and discovered that I need to switch the Mellanox ConnectX-3 into Ethernet mode for it to work. When trying to install the MFT tools (from the "Getting started with Mellanox Firmware Tools (MFT) for Linux" tutorial over on the Mellanox website), I get a...
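For context, the port-type switch itself is a one-liner once MFT is in place; a sketch, with the device path as an assumption:

```shell
mst start

# LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet, 3 = auto (VPI)
# (device path /dev/mst/mt4099_pciconf0 is an assumption)
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```

If MFT refuses to install, the inbox mlx4 driver also accepts a runtime switch via sysfs, e.g. `echo eth > /sys/bus/pci/devices/<pci-id>/mlx4_port1`, though that does not persist in firmware the way mlxconfig does.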
First of all, merry Christmas guys!
I just installed 2 Mellanox InfiniBand cards (40Gb ConnectX-2), one in each server, and after that installed the latest Mellanox drivers from here:
Finally I ran a test with iperf, but I only get this...
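On 40Gb IPoIB links, a single iperf TCP stream rarely tells the whole story; parallel streams and connected mode with a large MTU usually change the numbers considerably. A sketch (the peer IP is an assumption):

```shell
# On IPoIB, switch to connected mode and raise the MTU first
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520

iperf -s                           # on the first node
iperf -c 192.168.10.1 -P 8 -t 30   # on the second: 8 parallel streams, 30 s
```

Datagram mode with the default 2044-byte MTU is a common reason for disappointing first results on ConnectX-2 IPoIB.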
Just migrated Ceph entirely to BlueStore.
A test with a Windows Server 2016 VM has good results, but I think the limiting component is the VirtIO driver!
See also https://forum.proxmox.com/threads/virtio-ethernet-driver-speed-10gbite.35881/ concerning Ethernet speed ...
I see no "tunables" to...
What performance should I expect from this cluster? Are my settings OK?
System: Supermicro 2028U-TN24R4T+
2-port Mellanox ConnectX-3 Pro 56Gbit
4-port Intel 10GigE
Memory: 768 GB
CPU: dual Intel(R) Xeon(R) E5-2690 v4 @ 2.60GHz
Ceph: 28 OSDs
24x Intel NVMe 2000GB...