I just got an answer back from Mellanox that DMA on their cards is set by the kernel/CPU, so apparently there is nothing to do manually. When we upgraded the firmware for 12 NICs, all of them complained that DMA wasn't enabled. We are currently scratching our heads .. that's why I started this thread. I'm...
Ok, because we swapped out 10Gb Base-T cards for these 25Gb cards and are seeing lower performance. Everything was much snappier with the 10Gb, and our Ceph is running entirely on NVMe drives, which should be able to nearly, if not entirely, saturate the 25Gb link. In trying to find the answer, we can't...
No, RDMA is "Remote Direct Memory Access" .. I'm talking about standard DMA on the PCIe bus, where the card can read/write directly to memory. While doing the firmware upgrades, all of the Mellanox cards reported not having DMA enabled
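For anyone else chasing the same thing: as far as I can tell, "DMA enabled" on a PCIe device just means the Bus Master bit in the PCI command register is set, and that can be checked from the OS with lspci (the PCI address below is only an example, use whatever your system reports):

    # find the card's PCI address
    lspci | grep -i mellanox
    # check the Bus Master (DMA) bit for that address
    # "BusMaster+" means enabled, "BusMaster-" means disabled
    lspci -vv -s 01:00.0 | grep -i busmaster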
Also, since the OFED drivers don't yet exist for Debian 11, is there anything else to be...
Since Mellanox doesn't yet support Debian 11, what are folks doing to get maximum performance out of their Mellanox fiber NICs?
Also, does anyone know how to turn Direct Memory Access on for the Mellanox card model CX4121A? This is a 25Gb SFP28 dual-port card.
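If it does turn out the Bus Master bit is genuinely off, my understanding is the driver is supposed to set it itself, but just to have something concrete to try, it can in principle be flipped with setpci (PCI address is a placeholder, and I'd treat this only as a diagnostic):

    dev=01:00.0                              # placeholder PCI address for the CX4121A
    cur=$(setpci -s "$dev" COMMAND)          # current command register value, in hex
    echo "command register: $cur"
    # bit 2 (0x4) is Bus Master Enable; set it and write the register back
    setpci -s "$dev" COMMAND=$(printf '%04x' $(( 0x$cur | 0x4 )))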
Thanks in advance
By the way, I did indeed do the firmware update on all installed cards .. on one server we removed the Mellanox card and installed an Intel card for testing purposes, but otherwise all other servers have had the Mellanox firmware updated and were rebooted just yesterday, 8/17/2021
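(For anyone who wants to check the firmware state without installing OFED: if I'm not mistaken, the mstflint package in plain Debian/Proxmox can query the cards .. the PCI address is a placeholder again.)

    apt install mstflint
    mstflint -d 01:00.0 query      # shows current firmware version, PSID, GUIDs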
The...
Please don't get me wrong .. I'm not trying to say Aaron is wrong .. just trying to get confirmation of solid hardware compatibility
We've spent months and countless hours trying to get Ceph switched over from 10Gb copper to 25Gb fiber, to no avail ... so I was trying to make 100% sure that the...
That's not exactly what I asked, but thank you Aaron
We had a network pro test the Mellanox cards on Debian Linux and they worked well .. once installed in Proxmox, they did not. That doesn't sound like a need for a firmware upgrade. We also know that Proxmox switched to using the Ubuntu kernel...
We have been grappling with getting our hyperconverged Ceph cluster moved over from 10Gb copper to 25Gb optical fiber for months now. We have been entirely unsuccessful to this point. Is there a hardware compatibility list to find out whether the 25Gb network cards we are trying to...
By the way, the Intel NICs we installed to test with are these
Intel XXV710 Dual Port 25GbE SFP28/SFP+ PCIe Adapter
Are these fully supported on Proxmox 6.4 and 7.0, at full 25Gb speed and functionality?
I ask because if it turns out we have to remove the Mellanox cards from all servers and...
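As I understand it, the XXV710 uses the in-kernel i40e driver, so a quick sanity check on whichever interface names the Intel ports come up as would look like this (the name below is a placeholder):

    iface=ens2f0                                       # placeholder, use the name the Intel ports get
    ethtool -i "$iface"                                # driver (should be i40e) and firmware version
    ethtool "$iface" | grep -E 'Speed|Link detected'   # negotiated speed, e.g. 25000Mb/s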
Ok .. an update on this .. we just ordered Intel 25Gb cards (two for testing) and installed them in a couple of the servers
I was able to assign IPs and copy large files back and forth. I have been unable to do this with the Mellanox cards
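For reference, a cleaner way than file copies to isolate the NIC/link from the disks and from Ceph itself is an iperf3 run between two nodes over the 25Gb addresses (the address below is a placeholder):

    # on one node
    iperf3 -s
    # on another node, pointing at the first node's 25Gb address
    iperf3 -c 10.10.10.1 -t 30 -P 4      # 30 seconds, 4 parallel streams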
The Mellanox cards we have installed are the following...
No, the machines couldn't be left in that state .. I had to remove all IP addressing from the 25Gb NICs and re-apply IP addresses to the 10Gb NICs, so that now only the 10Gb NICs on the Ceph nodes have addressing.
Also, as far as 'ip route' goes, it wouldn't show anything as there are no routes since this...
After applying the config, no, there were no errors whatsoever .. that was the reason I treated the changeover from 10Gb to 25Gb as successful
I understand the GUI is showing what is configured in /etc/network/interfaces .. the problem isn't what is in /etc/network/interfaces but rather, the...
What you are saying is incorrect; please re-reference the screenshots I sent. Here is a piece of that in a fresh screenshot, focusing on the IP addressing on the 25Gb NICs from the 'ip a' command
ALL of the 25Gb NICs showed IP addresses, yet the GUI shows the IP addressing ONLY on the...
I don't think the last post went through ... sorry .. trash Chrome browser .. using Firefox this time
ens1f0 and ens1f1 are the 25Gb fiber NICs
ens5f0 and ens5f1 are the 10Gb copper NICs
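For context, the 25Gb addressing lives in /etc/network/interfaces like any other static config; a minimal stanza of the sort I mean (the subnet and MTU below are just placeholders) looks like:

    auto ens1f0
    iface ens1f0 inet static
            address 10.10.10.11/24     # placeholder Ceph cluster subnet
            mtu 9000                   # only if the switch ports are also set for jumbo frames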
So this issue is partially solved at this point ... now it has turned into something else
As I previously mentioned, on July 5th, 2021 we moved our Ceph servers over to 25Gb fiber instead of 10Gb copper .. at least that's what we thought.
It turns out there seems to be a bug in the GUI of...
"ceph auth ls" shows client.admin totally different from all nodes
client.admin
        key: yada yada
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
On the client nodes, none of those "caps:" entries are there .. just the "key:" entry followed by the actual key, of course
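I honestly don't know why "ceph auth ls" would differ between nodes, but in case the caps on client.admin really did get clobbered somewhere, my understanding is they can be reset from a node where they still look right, and the keyring then re-copied to the clients (default paths assumed):

    # run on a node where client.admin still shows full caps
    ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
    # re-export the admin keyring and copy it to the client nodes
    ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring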
At this point, it's looking like Ceph is blocking those clients .. there aren't any networking issues that I can see. I've verified iptables on the Ceph nodes; all chains are empty and set to "ACCEPT", so there is no firewalling going on. Pings go through just fine from client node to Ceph monitor node...
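Beyond ping, it's probably worth confirming that the monitor TCP ports are actually reachable from the client nodes (3300 is the msgr2 port, 6789 the legacy one .. the monitor address below is a placeholder):

    # from a client node
    for port in 3300 6789; do
        nc -zv 10.10.10.1 "$port"    # placeholder monitor address
    done
    iptables -L -n                   # confirm nothing is filtering on either side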