Here is how I was able to get Proxmox working with InfiniBand and SR-IOV.
The hardware used is a Mellanox switch (SX6036) and a Mellanox ConnectX-4 100 Gbps EDR dual- (or single-) port card. Make sure the firmware is up to date.
AS FAR AS I'M AWARE, THIS WILL NOT WORK WITH OPENSM AND REQUIRES A MELLANOX SWITCH...
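In broad strokes, the SR-IOV side comes down to enabling virtual functions in the card's firmware and then exposing them to the kernel. A minimal sketch, assuming the Mellanox mft tools are installed; the device path, interface name, and VF count are placeholders for your own setup:

# enable SR-IOV in the firmware (reboot afterwards)
mst start
mlxconfig -d /dev/mst/mt4115_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=8

# after the reboot, expose the virtual functions to the kernel
echo 8 > /sys/class/net/ib0/device/sriov_numvfs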
Hi, PVE geeks:
I built a PVE cluster on three servers (with Ceph), with PVE & Ceph package versions as follows:
root@node01:~# pveversion
pve-manager/7.3-3/c3928077 (running kernel: 5.15.74-1-pve)
root@node01:~# ceph --version
ceph version 16.2.13 (b81a1d7f978c8d41cf452da7af14e190542d2ee2)...
Hi all,
We are looking into deploying a new refurbished NVMe HCI Ceph Proxmox cluster.
At this point we are looking at 7 nodes, each with 2 NVMe OSD drives, with room to expand to 2 more NVMe OSDs.
As we would quickly saturate a 25 GbE link, we should be looking into 40/50/100 GbE links and switches...
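For a rough sense of scale: assuming around 3.5 GB/s of sequential throughput per enterprise NVMe drive (a ballpark figure, not from the thread), two OSDs per node already add up to roughly 7 GB/s, i.e. about 56 Gbit/s, well past a 25 GbE link, and four OSDs would approach 112 Gbit/s. That is the back-of-the-envelope argument for 100 GbE on the Ceph network.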
I am trying to build OFED drivers from source for my ConnectX-3 Pro cards so that I can make one port 40/56 Gb InfiniBand and the second port 40 Gb Ethernet.
This configuration requires OFED drivers, as far as I understand. mlx4_core and mlx4_en won't work simultaneously, as far as I am aware -...
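For what it's worth, the in-tree mlx4 driver can usually run the two ports in mixed mode via a module option, without building OFED from source. A sketch, assuming the stock Proxmox kernel:

# /etc/modprobe.d/mlx4.conf -- port_type_array: 1 = InfiniBand, 2 = Ethernet
options mlx4_core port_type_array=1,2

# apply on next boot
update-initramfs -u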
Hello forum,
I have the problem that my InfiniBand card (Mellanox X3), which I use for IPoIB, is always shown as "Unknown" in the Proxmox GUI, and I can only edit it via the shell.
Is this a known problem?
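For anyone in the same spot: editing via the shell usually means declaring the IPoIB interface directly in /etc/network/interfaces. A minimal sketch; the interface name and address are assumptions:

# /etc/network/interfaces -- hypothetical IPoIB interface
auto ib0
iface ib0 inet static
    address 10.10.10.1/24
    pre-up modprobe ib_ipoib
    pre-up echo connected > /sys/class/net/ib0/mode
    mtu 65520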
Hi all,
I'm new to Proxmox, having been running VMware since maybe version 3.5 or so. I decided to switch over because I wanted to take the cheaper path to InfiniBand, and then found the support in VMware not quite there.
My setup now is a Proxmox server with a dual-port Mellanox ConnectX-3 card. My...
Hello Fellas,
Since there are many, many posts about people trying to get cheap Mellanox ConnectX VPI 40 Gb/s cards working on Proxmox, and I really had a hard time getting mine to work, I am willing to share my insights and hope to make some people's day.
First off: I am running a cluster of 4...
I am upgrading a 16-node cluster that has 2 NVMe drives and 3 SATA drives used for Ceph. My network cards are Mellanox MCX354A-FCBT and have 2 QSFP ports that can be configured as InfiniBand or Ethernet. My question is how best to utilize the two ports. My options are (see the bond sketch after the list):
1) LACP into VPC...
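For the LACP option, a bond over the two ports (in Ethernet mode) would look roughly like this in /etc/network/interfaces; the interface names are placeholders, and the switch side needs a matching port-channel/VPC:

# hypothetical LACP bond over the two QSFP ports
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0 enp65s0d1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100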
Hello,
I'm testing the performance between two nodes connected by two Mellanox cards:
MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
On the latest Proxmox 6 version I have installed all packages:
apt-get install rdma-core libibverbs1 librdmacm1 libibmad5 libibumad3 ibverbs-providers...
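Before benchmarking, it's worth confirming that the RDMA stack actually sees the cards; a quick sanity-check sketch (ib_send_bw comes from the perftest package, an extra install):

# list RDMA devices, ports and link state
ibv_devinfo

# raw RDMA bandwidth between the nodes (perftest package)
ib_send_bw              # on the server node
ib_send_bw <server-ip>  # on the client node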
I have a Mellanox ConnectX-3 card configured with VFs connected to Voltaire 4036s that are running my subnet manager. I have ib_ipoib loaded on the Proxmox host and can successfully assign an IP to the VF and ping my other IB hosts. When I pass the VF to an [un]privileged LXC container with or...
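For reference, getting the InfiniBand device nodes visible inside a container usually involves entries along these lines in the container config; a sketch, with the device major number (231, to my understanding of the kernel's InfiniBand char devices) worth double-checking with ls -l /dev/infiniband:

# /etc/pve/lxc/<vmid>.conf -- hypothetical passthrough entries
lxc.cgroup2.devices.allow: c 231:* rwm
lxc.mount.entry: /dev/infiniband dev/infiniband none bind,create=dir 0 0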
Hi,
I have 2x ConnectX-2 cards and need to set up InfiniBand and NFS over RDMA on Proxmox 5.4. I followed a few guides online, but I'm unable to change the protocol type from ETH to IB. What is the correct way of setting the port type on Proxmox?
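On mlx4-based cards (ConnectX-2/3) the port type can typically be flipped at runtime through sysfs; a sketch, with a hypothetical PCI address:

# find the card's PCI address
lspci | grep -i mellanox

# set port 1 to InfiniBand (write "eth" to go the other way)
echo ib > /sys/bus/pci/devices/0000:04:00.0/mlx4_port1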
Hi,
first, merry Christmas, guys!
I just installed 2 Mellanox InfiniBand cards (40 Gb ConnectX-2), one in each server, and after that installed the latest Mellanox drivers from here:
http://www.mellanox.com/page/products_dyn?product_family=27
Finally I ran a test with iperf, but I only get this...
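Low iperf numbers over IPoIB are often down to datagram mode and the default MTU; a tuning sketch worth trying (interface name assumed to be ib0):

# switch IPoIB to connected mode and raise the MTU
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520

# retest with several parallel streams
iperf -c <server> -P 4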
Hello all, I'm hoping you may be able to help me, as I seem a bit in over my head this time.
I have three physical servers that I am working to build into an HA cluster. Two of those servers will be the processing power (hereafter referred to as P1 and P2) of the cluster and one is the storage...
I have a GlusterFS system running over InfiniBand which I'm testing with Proxmox 3.4 (soon to be upgraded to 4.4). The vzdumps work great, but I'm wondering how I can run my VM images directly from the GlusterFS mount point without a huge performance degradation. I tried SATA disks with obvious...
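One angle worth checking: Proxmox can address GlusterFS through its own storage layer (QEMU's native gluster backend) instead of the FUSE mount, which usually helps with VM image performance. A sketch of a storage.cfg entry; the server address and volume name are placeholders:

# /etc/pve/storage.cfg -- hypothetical GlusterFS storage entry
glusterfs: gluster-vms
    server 10.10.10.1
    volume gv0
    content images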
Greetings,
This is my first post so please be gentle.
Has anybody had success installing OFED on Debian Jessie? I tried the install script from Mellanox, but it seems to be hard-coded to Debian 8.2 or 8.1. PVE 4.1 is on Debian 8.3, which is what I was running when I spent two days trying...
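One workaround that sometimes helps when the installer's distro check is too strict is an explicit override flag. Whether such a flag exists depends on the MLNX_OFED version, so treat the flag name below as an assumption and check the installer's help first:

# see which override options this installer version actually supports
./mlnxofedinstall --help

# hypothetical: some installer versions accept a flag to skip the distro check
./mlnxofedinstall --skip-distro-check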
Hello,
is there any chance of getting SCST together with srpt and the required kernel modules into the releases?
Current SCST compiles fine against kernel 4.4.6 and the in-tree OFED/RDMA packages.
Having those packages/kernel modules in the release would remove the need to compile/build with each...
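Until then, building SCST against the running PVE kernel looks roughly like this; a sketch, with the make targets to be verified against the SCST documentation:

# headers and toolchain for the running Proxmox kernel
apt-get install pve-headers-$(uname -r) build-essential

# from an SCST source checkout: build and install the core and the SRP target
make scst scst_install
make srpt srpt_install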
Hi.
I am sorry for my bad English :(
I have 2 clusters, on Proxmox VE ver. 3.4 and 4.1.
All equipment is absolutely identical in both clusters.
All nodes have 2x X5650 Xeons, 96 GB RAM, and an InfiniBand card: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0)
Both...