Search results

  1. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Hi, I added a bridge to use it as a switch. I use Cisco SFP28 cables with the MikroTik and the Mellanox cards, so I instantly get 25 Gbit links, also on the breakout cables.
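    A rough RouterOS 7 sketch of such a bridge (that the bridge lives on the MikroTik, and the port names, are assumptions; check yours with /interface print):

    [CODE]
    # One bridge over the SFP28 ports so the switch forwards between them
    /interface bridge add name=bridge1 vlan-filtering=no
    /interface bridge port add bridge=bridge1 interface=sfp28-1
    /interface bridge port add bridge=bridge1 interface=sfp28-2
    /interface bridge port add bridge=bridge1 interface=qsfp28-1-1
    [/CODE]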
  2. Docker on LXC container faster than on VM

    Hello, we ran Docker in an LXC container and in a VM on a Proxmox 8.x.x three-node cluster with NVMe Ceph storage (24 NVMe drives) on Dell R740XD servers. Docker runs on the latest Debian Bookworm. Nesting is activated for both the LXC container and the VM. We put our monitoring in a Docker container in the LXC and the VM...
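    A minimal sketch of enabling the nesting feature for such a container on the Proxmox host (the container ID 101 is only an example):

    [CODE]
    # Enable nesting for the container, then restart it so the flag takes effect
    pct set 101 --features nesting=1
    pct reboot 101

    # Verify the flag in the container config
    grep features /etc/pve/lxc/101.conf    # expect: features: nesting=1
    [/CODE]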
  3. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Solved, it now works at 24 W instead of 140 W.
  4. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Hi Falk, I got everything together. I connected the 100 GbE breakout to one Mellanox ConnectX-4 Lx; the MikroTik and the Mellanox both show the link as established (the Mellanox link LED lights green), but the activity LED of the Mellanox card blinks green cyclically and I get no traffic over the link. I see the card with...
  5. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Perfect, thanks a lot. I'll go with the MikroTik CRS510-8XS-2XQ-IN and the Mellanox ConnectX-4 SFP28 25 GbE cards to reduce power consumption. I will use two passive DAC 100 GbE QSFP28 to 4x 25 GbE SFP28 breakout cables to connect the servers and four 1 GbE transceivers to connect legacy equipment on the SFP28 ports. I...
  6. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Yes, of course RouterOS looks a bit different from the classic web UIs of switches, but it is fine for managing things and the CLI looks nice. So I think I will go with the MikroTik after all. The only open point is the configuration of the DAC breakout ports, for example bonding (LACP). So if you would please be so kind as to...
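    A hedged RouterOS 7 sketch of an LACP bond across two breakout members (the port names, and that the bond then joins bridge1, are assumptions):

    [CODE]
    # 802.3ad (LACP) bond over two SFP28 breakout ports of the 100 GbE cage
    /interface bonding add name=bond1 slaves=qsfp28-1-1,qsfp28-1-2 mode=802.3ad transmit-hash-policy=layer-3-and-4 lacp-rate=30secs
    # Make the bond a switch port like the other interfaces
    /interface bridge port add bridge=bridge1 interface=bond1
    [/CODE]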
  7. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Thanks for the hint about RouterOS and the VM. I bought Mellanox ConnectX-4 MXC422A cards (dual-port SFP+ 25 GbE) as daughterboards for the Dell servers. It was a price decision: 15 EUR per card is better than 69 EUR for one Broadcom. Now I am checking MikroTik RouterOS 7.1.2. fs.com is out of...
  8. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Yes, we have 3 Dell R740XD servers. Ceph is connected via 100 GbE Mellanox ConnectX-6 dual-port cards in a routed network without a switch; that works fine. All 3 nodes have quad-port Intel X550 10 GbE NICs as daughterboards: one bond for WAN (plugged into the switch) and one routed network for migration. We have a...
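    A hedged /etc/network/interfaces sketch of such a WAN bond over two of the X550 ports, used as the bridge uplink (the interface names and addresses are example values, not the actual setup):

    [CODE]
    # LACP bond over two X550 ports, bridged for the guests
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
    [/CODE]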
  9. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Thanks a lot. So would you get 4 ports from the DAC breakout on the 100 GbE port? How much is the power consumption of that MikroTik switch?
  10. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Can you confirm that a Broadcom with the 57414 chipset works well? We have decided to use SFP28 with a new MikroTik switch instead of SFP+. Otherwise I would use the Mellanox ConnectX-5 CX512F dual-port SFP28. The background is that our Netgear switch based on 10 GbE Base-T consumes too much power (round...
  11. SFP+ Quadport Cards Broadcom, QLogic or Intel

    Hello, we plan to upgrade the network from 10 GbE Base-T to 10/25 GbE SFP+/SFP28. In the past I have preferred Intel because their cards are stable and run out of the box. But the Dell R740XD server only offers Broadcom, QLogic and Intel as quad-port daughterboards, and I read there are compatibility issues...
  12. Ceph Slow Ops if one node is rebooting (Proxmox 7.0-14 Ceph 16.2.6)

    Hello, just use the same config as described here: https://pve.proxmox.com/wiki/Network_Configuration#_routed_configuration Just change the interfaces and IPs to yours. Kind regards, Daniel
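    A minimal sketch of such a routed, switchless setup on one of the three nodes, in the spirit of the linked page (the interface names and the 10.15.15.0/24 addressing are placeholder assumptions):

    [CODE]
    # /etc/network/interfaces fragment on node 1: each link points directly at one peer
    auto ens1f0
    iface ens1f0 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.51/32 dev ens1f0
        down ip route del 10.15.15.51/32

    auto ens1f1
    iface ens1f1 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.52/32 dev ens1f1
        down ip route del 10.15.15.52/32
    [/CODE]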
  13. [SOLVED] pve 6.3 vs. mellanox ofed

    Are you sure that the mode is set to Ethernet for your card?
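    A hedged way to check and, if needed, change the port type with the Mellanox firmware tools (the mst device path is an example; 2 = ETH, 1 = IB):

    [CODE]
    # Query the current link type of the ConnectX card
    mst start
    mlxconfig -d /dev/mst/mt4117_pciconf0 query | grep LINK_TYPE

    # Force both ports to Ethernet mode, then reboot for it to take effect
    mlxconfig -d /dev/mst/mt4117_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
    [/CODE]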
  14. Update PVE 6 to 7 with Installes Mellanox Connectx-6 Drivers DKMS Ceph not working

    Hello, just take a look at this thread, I documented it there. As I remember, replace the apt repo with the Mellanox one for Ubuntu.
  15. [SOLVED] pve 6.3 vs. mellanox ofed

    So if you have not installed Proxmox 7.x yet and you have a working cluster, remove a node from the cluster (see the Cluster Manager tutorial) and install Proxmox 7.x on it. Proxmox 7.x ships a default driver for Mellanox (mlx4/mlx5), so you do not have to install anything additional. After you...
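    A quick, hedged check that the in-box mlx5 driver is the one bound to the card (the interface name is an example):

    [CODE]
    # Which kernel driver is bound to the Mellanox NIC?
    lspci -nnk | grep -A3 Mellanox

    # Driver and firmware as seen by the interface
    ethtool -i ens1f0np0 | grep -E 'driver|version'
    [/CODE]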
  16. [SOLVED] pve 6.3 vs. mellanox ofed

    As I told you, I'm fine with the network performance and latency. So I did not try to enable RoCEv2 with the default kernel driver.
  17. [SOLVED] pve 6.3 vs. mellanox ofed

    So I checked the built-in drivers: RoCEv2 is not enabled, but the default driver performs better for my setup than the original Mellanox drivers. Latency is below 0.035 ms and throughput is roughly 96 Gbit/s. I think that is OK. I have no issues with the virtual machines and Ceph, no freezing, no...
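    A rough sketch of how such numbers can be reproduced between two nodes (the peer IP is an example; iperf3 must be listening on the other side):

    [CODE]
    # On the receiving node
    iperf3 -s

    # On the sending node: parallel streams to fill the 100 GbE link
    iperf3 -c 10.15.15.51 -P 8 -t 30

    # Quick latency check over the same link
    ping -c 100 -i 0.2 10.15.15.51
    [/CODE]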
  18. [SOLVED] pve 6.3 vs. mellanox ofed

    And use a routed network instead of a broadcast setup. I had a lot of trouble with broadcast when a node goes down for an update or maintenance: the Ceph storage freezes with slow ops.
  19. [SOLVED] pve 6.3 vs. mellanox ofed

    Hello, I used the Proxmox 7 default driver instead of the original Mellanox driver, because the built-in driver performs better in my case (switchless 3-node cluster, Ethernet mode). Latency is nearly the same between both driver versions. Install the latest firmware for Ubuntu 21.xxx on the cards.
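    A hedged way to see which firmware is currently on the card before updating (the PCI address is an example):

    [CODE]
    # mstflint ships in the Debian/Proxmox repositories and can query the flash directly
    apt install mstflint
    mstflint -d 0000:01:00.0 query
    [/CODE]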
  20. [SOLVED] pve 6.3 vs. mellanox ofed

    As I remember, ConnectX-3 cards are not supported. Can you post the log file /tmp/mlnx_fw_update.log?