Search results

  1. PVE7 - Ceph 16.2.5 - Pools and number of PG

    Which driver do you use for the Mellanox ConnectX-5 cards to get them running under Debian Bullseye (11)? I am trying to upgrade from 6.4.x to 7.x, but the ConnectX-6 cards won't work. Did you run the cards in eth mode?
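
    If the cards are meant to run in Ethernet mode, the port type can be switched in firmware. A minimal sketch, assuming the Mellanox MFT tools are installed; the /dev/mst device path is a placeholder, not taken from the thread:

      # start the Mellanox software tools service and list the devices
      mst start
      mst status

      # query the current port type (1 = InfiniBand, 2 = Ethernet)
      mlxconfig -d /dev/mst/mt4123_pciconf0 query | grep LINK_TYPE

      # switch both ports to Ethernet mode, then power-cycle the host
      mlxconfig -d /dev/mst/mt4123_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2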
  2. [SOLVED] Upgrade to Proxmox 7 - Bond (LACP) Interface not working anymore

    After installing ifupdown2 everything works fine.
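
    A minimal sketch of that fix, assuming a standard Proxmox VE 7 node (ifreload is provided by ifupdown2; a plain reboot works as well):

      # install ifupdown2, which replaces the classic ifupdown scripts
      apt update
      apt install ifupdown2

      # re-apply /etc/network/interfaces without rebooting
      ifreload -a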
  3. [SOLVED] Upgrade to Proxmox 7 - Bond (LACP) Interface not working anymore

    Same problem here after upgrading from 6 to 7 (3x bonds, LACP 802.3ad). No ifupdown2 installed, but net-tools (ifconfig). But I have no luck with: systemctl enable networking; reboot. Disabling the auto ensxx lines fixed the issue, so I have to investigate...
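
    For comparison, a minimal /etc/network/interfaces sketch of such an LACP bond under ifupdown2; the interface names and the address are placeholders, not taken from the thread:

      auto bond0
      iface bond0 inet static
              address 192.168.10.11/24
              bond-slaves ens1f0 ens1f1
              bond-mode 802.3ad
              bond-miimon 100
              bond-xmit-hash-policy layer2+3

    With ifupdown2 the slave NICs typically do not need their own auto stanzas, which lines up with the "disable auto ensxx" observation above.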
  4. Mellanox MCX653106A-ECAT Support

    Fixed :) A firewall rule permits the local LAN. After configuring the datacenter firewall rule, everything works fine. But I switched to Ethernet mode with RoCE, and it runs very smoothly.
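
    A sketch of what such a datacenter-level rule can look like in /etc/pve/firewall/cluster.fw, using the 10.10.20.0/24 storage subnet that appears elsewhere in these results as an example:

      [OPTIONS]
      enable: 1

      [RULES]
      # allow all traffic from the local storage LAN
      IN ACCEPT -source 10.10.20.0/24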
  5. Mellanox MCX653106A-ECAT Support

    I created an opensm.conf file for each node and port: pve-01 /etc/opensm/opensm.ib0.conf guid {{PortGuid0}} daemon TRUE log_file /var/log/opensm.ib0.log dump_files_dir /var/log/opensm/ib0 pve-01 /etc/opensm/opensm.ib1.conf guid {{PortGuid0}} daemon TRUE log_file /var/log/opensm.ib1.log...
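
    Reconstructed from that snippet, one of the per-port files would look roughly like this ({{PortGuid0}} stands for the port GUID reported by ibstat; the opensm.ib1.conf file presumably uses the second port's GUID and its own log paths):

      # /etc/opensm/opensm.ib0.conf on pve-01
      guid {{PortGuid0}}
      daemon TRUE
      log_file /var/log/opensm.ib0.log
      dump_files_dir /var/log/opensm/ib0

    Each instance can then be pointed at its file with opensm -F /etc/opensm/opensm.ib0.conf, or started by GUID as shown in result 7.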
  6. 3 Node switchless Infiniband Setup with mellanox

    Hello, I have installed dual-port ConnectX-6 cards (Mellanox) in each node (3 in the cluster). Each node is connected to every other node in a full mesh via DAC copper cables. I think I am missing something in the opensm configuration. Does anyone have a working configuration for this setup with IPoIB and can post...
  7. Mellanox MCX653106A-ECAT Support

    I tried that, but got the same result... But if I start opensm -g {Port-Guid} --daemon, the interfaces show the RUNNING flag, and route as well as ip route show that the links are up, but I cannot ping the hosts...
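
    In a switchless mesh each DAC link is its own InfiniBand subnet, so one subnet manager has to run per local port. A sketch of starting them by GUID (replace the placeholders with the Port GUID values reported by ibstat):

      # read the port GUIDs of the local HCA
      ibstat | grep 'Port GUID'

      # start one opensm instance per local port
      opensm -g <port-1-guid> --daemon
      opensm -g <port-2-guid> --daemon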
  8. Mellanox MCX653106A-ECAT Support

    So I just noticed that the RUNNING flag is missing on ib0/ib1:
    ib0: flags=4099<UP,BROADCAST,MULTICAST> mtu 65520
         inet 10.10.20.3 netmask 255.255.255.0 broadcast 10.10.20.255
         unspec 80-00-02-46-FE-80-00-00-00-00-00-00-00-00-00-00 txqueuelen 256 (UNSPEC)
         RX packets 0...
  9. Mellanox MCX653106A-ECAT Support

    I just got mode and mtu working without any issues, but ip route still shows the link down. To use connected mode and the larger MTU, do the following: 1. Disable ipoib_enhanced in /etc/modprobe.d/ib_ipoib.conf: options .... ipoib_enhanced=0 .... 2. Restart the openibd service: service openibd restart /etc/init.d/openibd...
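
    A sketch of those two steps, assuming the MLNX_OFED ib_ipoib module (the option text elided with "...." above is omitted here):

      # /etc/modprobe.d/ib_ipoib.conf -- disable enhanced IPoIB so connected mode is available
      options ib_ipoib ipoib_enhanced=0

      # restart the OFED stack so the module option takes effect
      /etc/init.d/openibd restart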
  10. Mellanox MCX653106A-ECAT Support

    I installed the cards on each host with these steps: 1. check if the Mellanox card is present: lspci | grep Mellanox 2. install the pve-headers: aptitude install pve-headers 3. reboot the system: reboot 4. create the Mellanox repo: cd /etc/apt/sources.list.d/ wget...
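
    The first steps as plain shell commands (the wget target is truncated here; the full repository URL is quoted in result 16):

      # 1. check that the Mellanox card is visible on the PCI bus
      lspci | grep Mellanox

      # 2. install the kernel headers needed to build the OFED modules
      aptitude install pve-headers

      # 3. reboot so the running kernel matches the installed headers
      reboot

      # 4. prepare the Mellanox repository (see result 16 for the list file URL)
      cd /etc/apt/sources.list.d/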
  11. Mellanox MCX653106A-ECAT Support

    What should be expected if I call ibhosts with port 0 and port 1? I get:
    Host 1: ibhosts -P 0
    Ca : 0xb8cef603005d458f ports 1 "pve-03 HCA-2"
    Ca : 0xb8cef603005d403e ports 1 "pve-01 HCA-1"
    Host 1: ibhosts -P 1
    Ca : 0xb8cef603005d458f ports 1 "pve-03 HCA-2"
    Ca ...
  12. Mellanox MCX653106A-ECAT Support

    Hello, this is not a stupid question. I check with ibstatus whether the links are up when I reboot a server: while the server is rebooting, the link is shown as unplugged/down, so I think the connection works and the cables are plugged in. ibstatus output: Infiniband device 'mlx5_0' port 1 status...
  13. Mellanox MCX653106A-ECAT Support

    Hello, I am trying to bring up these interfaces, but if I follow the guidelines for a routed mesh setup in InfiniBand mode, I cannot set the commented-out lines. Node1: auto ib0 iface ib0 inet static address 10.10.20.1/24 pre-up modprobe ib_ipoib # pre-up echo connected > /sys/class/net/ib0/mode #...
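
    Reconstructed as an /etc/network/interfaces stanza, the quoted fragment looks roughly like this; the commented pre-up line is the one that could not be enabled, and a further commented line is truncated in the snippet:

      # Node1, /etc/network/interfaces
      auto ib0
      iface ib0 inet static
              address 10.10.20.1/24
              pre-up modprobe ib_ipoib
              # pre-up echo connected > /sys/class/net/ib0/mode
              # ... (further pre-up line truncated in the snippet)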
  14. [SOLVED] pve 6.3 vs. mellanox ofed

    As promised, here is the solution (works with the ConnectX-6 ECAT card): 1. check if the Mellanox card is present: lspci | grep Mellanox 2. install the pve-headers: aptitude install pve-headers 3. reboot the system: reboot 4. create the Mellanox repo: cd /etc/apt/sources.list.d/ wget...
  15. [SOLVED] pve 6.3 vs. mellanox ofed

    I'll try it with the Mellanox repo and the pve-headers this week. Thanks for the link. I'll need the ConnectX-6 cards for Ceph storage, because the 10 GbE links limit the throughput of the 24 NVMes. I'll post my solution.
  16. [SOLVED] pve 6.3 vs. mellanox ofed

    I tried the following: add the Mellanox repository: cd /etc/apt/sources.list.d/ wget https://linux.mellanox.com/public/repo/mlnx_ofed/latest/debian10.5/mellanox_mlnx_ofed.list wget -qO - https://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox | sudo apt-key add - apt-get remove libipathverbs1...
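
    The same steps broken out as separate commands (URLs exactly as quoted above; the list file targets Debian 10.5, so the directory should match the Debian release actually in use, and the package removal list is truncated in the snippet):

      # add the MLNX_OFED apt repository
      cd /etc/apt/sources.list.d/
      wget https://linux.mellanox.com/public/repo/mlnx_ofed/latest/debian10.5/mellanox_mlnx_ofed.list

      # import the Mellanox signing key
      wget -qO - https://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox | sudo apt-key add -

      # remove conflicting packages before installing the OFED stack
      apt-get remove libipathverbs1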
  17. [SOLVED] pve 6.3 vs. mellanox ofed

    Can you provide the dependencies? Did you install make? Kind regards, Daniel
  18. Mellanox MCX653106A-ECAT Support

    Thanks a lot for answering the questions. We bought 3 cards, one for each server, and will give the Mellanox cards a try :-)
  19. Mellanox MCX653106A-ECAT Support

    Hello, we have 3 nodes that use 24 NVMes (8 drives per node) with Ceph and bonded 2x Intel 10 GbE adapters, and we plan to buy the Mellanox MCX653106A-ECAT-SP (connected as a full mesh with DAC cables for 200 Gb/s). - Are these cards supported by Proxmox with the Debian MLNX_OFED driver? - So are there...
  20. Ceph stop working after reboot of one node!

    Done, I reinstalled all nodes tonight and restored the backups. Marked as done :-)