Search results

  1. CTCcloud

    Mellanox Network Cards on Proxmox 7.0

    I just got an answer back from Mellanox that DMA on their cards is set via the kernel/CPU, so apparently there is nothing to do manually. When we upgraded the firmware for 12 NICs, all of them complained that DMA wasn't enabled. We are currently scratching our heads .. that's why I started this thread. I'm...
  2. CTCcloud

    Mellanox Network Cards on Proxmox 7.0

    Ok, because we swapped out 10Gb Base-T cards for these 25Gb and are seeing lower performance. Everything was much snappier with the 10Gb, and our Ceph is running entirely on NVMe drives, which should be able to nearly, if not entirely, saturate the 25Gb link. In trying to find the answer, we can't...
  3. CTCcloud

    Mellanox Network Cards on Proxmox 7.0

    No, RDMA is "Remote Direct Memory Access" .. I'm talking about standard DMA on the PCIe bus, where the card can read/write directly to memory. While doing firmware upgrades, all the Mellanox cards reported not having DMA enabled. Also, since the OFED drivers don't yet exist for Debian 11, is there anything else to be...
  4. CTCcloud

    Mellanox Network Cards on Proxmox 7.0

    Since Mellanox doesn't yet support Debian 11, what are folks doing to get maximum performance out of their Mellanox fiber NICs? Also, does anyone know how to turn Direct Memory Access on for the Mellanox card model CX4121A? This is a 25Gb SFP28 dual-port card (a hedged lspci check is sketched after this list). Thanks in advance
  5. CTCcloud

    Hardware Support

    By the way, I did indeed do the firmware update on all installed cards .. on one server we removed the Mellanox card and installed an Intel card for testing purposes, but otherwise all other servers have had the Mellanox firmware updated and the server was rebooted just yesterday, 8/17/2021. The...
  6. CTCcloud

    Hardware Support

    Please don't get me wrong .. I'm not trying to say Aaron is wrong .. just trying to get confirmation of solid hardware compatibility. We've spent months and countless hours trying to get Ceph switched over to 25Gb fiber vs 10Gb copper, to no avail ... so I was trying to make 100% sure that the...
  7. CTCcloud

    Hardware Support

    That's not exactly what I asked, but thank you, Aaron. We had a network pro test the Mellanox cards on Debian Linux and they worked well .. once installed in Proxmox, they did not. That doesn't sound like a need for a firmware upgrade. We also know that Proxmox switched to using the Ubuntu kernel...
  8. CTCcloud

    Hardware Support

    **** BUMP ****
  9. CTCcloud

    Hardware Support

    We have been grappling with getting our hyperconverged Ceph cluster moved over from 10Gb copper to 25Gb optical fiber for months now. We have been entirely unsuccessful to this point. Is there a hardware compatibility list to find out whether the 25Gb network cards we are trying to...
  10. CTCcloud

    [SOLVED] ceph storage not available to a node

    By the way, the Intel NICs we installed to test with are these: Intel XXV710 Dual Port 25GbE SFP28/SFP+ PCIe Adapter. Are these fully supported on Proxmox 6.4 and 7.0, at full 25Gb speed and functionality? I ask because if it turns out we have to remove the Mellanox cards from all servers and...
  11. CTCcloud

    [SOLVED] ceph storage not available to a node

    Ok .. an update on this .. we just ordered Intel 25Gb cards (two for testing) and installed them in a couple of the servers. I was able to assign IPs and copy large files back and forth. I have been unable to do this with the Mellanox cards. The Mellanox cards we have installed are the following...
  12. CTCcloud

    [SOLVED] ceph storage not available to a node

    No, the machines couldn't be left in that state .. I had to remove all IP addressing from the 25Gb NICs and re-apply IP addresses to the 10Gb ones, so that now only the 10Gb NICs on the Ceph nodes have addressing. Also, as far as 'ip route' goes, it wouldn't show anything, as there are no routes since this...
  13. CTCcloud

    [SOLVED] ceph storage not available to a node

    After applying the config, no, there were no errors whatsoever .. that was the reason I treated the changeover from 10Gb to 25Gb as successful. I understand the GUI is showing what is configured in /etc/network/interfaces .. the problem isn't what is in /etc/network/interfaces but rather the...
  14. CTCcloud

    [SOLVED] ceph storage not available to a node

    What you are saying is incorrect; please re-reference the screenshots sent. Here is a piece of that in a fresh screenshot, focusing on the IP addressing of the 25Gb NICs from the 'ip a' command. The 25Gb NICs, ALL of them, showed IP addresses, and the GUI shows the IP addressing ONLY on the...
  15. CTCcloud

    [SOLVED] ceph storage not available to a node

    Nothing new on this? We had a problem on Saturday morning from these networking issues, with the GUI telling a different story than the reality ...
  16. CTCcloud

    [SOLVED] ceph storage not available to a node

    I don't think the last post went through ... sorry .. trashy Chrome browser .. using Firefox this time. ens1f0 and ens1f1 are the 25Gb fiber NICs; ens5f0 and ens5f1 are the 10Gb copper NICs (a link-speed check for these is sketched after this list).
  17. CTCcloud

    [SOLVED] ceph storage not available to a node

    So this issue is partially solved at this point ... now it has turned into something else. As I previously mentioned, on July 5th, 2021 we moved our Ceph servers over to 25Gb fiber instead of 10Gb copper .. at least that's what we thought. It turns out there seems to be a bug in the GUI of...
  18. CTCcloud

    [SOLVED] ceph storage not available to a node

    "ceph auth ls" shows client.admin totally different from all nodes client.admin key: yada yada caps: [mds] allow * caps: [mgr] allow * caps: [mon] allow * caps: [osd] allow * on the client nodes none of those "caps:" entries are there .. just the "key:" entry followed by the actual key of course
  19. CTCcloud

    [SOLVED] ceph storage not available to a node

    At this point, it's looking like Ceph is blocking those clients .. there aren't any networking issues that I can see. I've verified iptables on the Ceph nodes and all chains are empty and set to "ACCEPT", so no firewalling is going on. Pings go through just fine from the client node to the Ceph monitor node... (a connectivity sketch follows below)
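
Items 1, 3, and 4 above ask how to confirm whether DMA is actually enabled on the Mellanox CX4121A. Below is a minimal shell sketch of one way to check from the operating system side; the PCI address 01:00.0 is an assumption and not taken from the posts, so substitute whatever the first command reports.

```
# Find the card's PCI address (run as root for complete output).
lspci | grep -i mellanox

# Hypothetical address 01:00.0 -- replace with the address found above.
# "BusMaster+" in the Control line means the kernel has enabled the device
# for DMA; the LnkSta line shows the negotiated PCIe speed and width.
lspci -vv -s 01:00.0 | grep -E 'BusMaster|LnkSta:'
```

If BusMaster shows as "-", the kernel/driver has not enabled the device for DMA, which would match Mellanox's statement in item 1 that DMA is controlled by the kernel/CPU rather than by anything set manually on the card.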
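
Item 2 reports lower throughput on the 25Gb links than on the old 10Gb ones, and item 16 names the interfaces (ens1f0/ens1f1). A hedged first check is whether the ports actually negotiated 25Gb/s; the interface names below come from the posts, everything else is standard iproute2/ethtool usage.

```
# One-line-per-interface overview of state and addressing.
ip -br addr

# Negotiated speed, duplex, and link state for each 25Gb port.
# Expect "Speed: 25000Mb/s" and "Link detected: yes".
ethtool ens1f0 | grep -E 'Speed|Duplex|Link detected'
ethtool ens1f1 | grep -E 'Speed|Duplex|Link detected'
```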
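
Item 18 shows a client.admin entry from "ceph auth ls" that is missing its caps lines on some nodes. Caps live in the cluster's auth database kept by the monitors, so the sketch below only illustrates the standard commands for inspecting and re-applying them; whether that is the right fix for this particular cluster is an assumption, not something confirmed in the thread.

```
# Show the current key and caps for client.admin as the monitors see them.
ceph auth get client.admin

# Re-apply the full administrative caps quoted in item 18.
ceph auth caps client.admin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'
```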
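
Item 19 rules out firewalling by checking iptables and pinging the monitor node. A minimal sketch of that kind of check from a client node; the monitor address 10.0.0.1 is a placeholder, and the ports are the Ceph monitor defaults (6789 for the v1 messenger, 3300 for v2).

```
# All chains should show policy ACCEPT with no REJECT/DROP rules.
iptables -L -n -v

# ICMP reachability, then the monitor ports themselves.
# Replace 10.0.0.1 with the actual monitor address.
ping -c 3 10.0.0.1
nc -zv 10.0.0.1 6789
nc -zv 10.0.0.1 3300
```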
