Search results

  1. [SOLVED] Slow 40GBit Infiniband on Proxmox 8.1.4

    Thank you!

        modprobe: FATAL: Module rdma not found in directory /lib/modules/6.8.12-8-pve

    My cards are set to Ethernet mode right now. Would that explain this error message (because in Ethernet mode RDMA might not work)?
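    A side note on the error itself: there is no kernel module literally named "rdma"; the RDMA stack is split into modules such as rdma_cm and ib_umad, so that exact modprobe invocation fails regardless of port mode. A minimal sketch for checking what the running PVE kernel ships (the module names are the common mlx4/RDMA ones, not taken from the thread):

        # Anything RDMA-related present for the running kernel?
        find /lib/modules/$(uname -r) -name '*rdma*'

        # The usual stack for ConnectX-3 class cards:
        modprobe mlx4_ib
        modprobe rdma_cm
        modprobe ib_umad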
  2. Looking for a "cheap" way to upgrade to 40gbit

        # ibstat
        CA 'rocep101s0'
            CA type: MT4099
            Number of ports: 2
            Firmware version: 2.36.5000
            Hardware version: 1
            Node GUID: 0x248a07030078bb90
            System image GUID: 0x248a07030078bb93
            Port 1:
                State: Active
                Physical...
  3. Looking for a "cheap" way to upgrade to 40gbit

    Thanks for the hint! I don't think my cards are limited in that way, because I have them in Ethernet mode and reach a throughput of 15 to 20 Gbps that way.
  4. [SOLVED] Slow 40GBit Infiniband on Proxmox 8.1.4

    So I got that Ethernet license as well and switched two of my three cards to Ethernet mode (and, of course, also the switch). With iperf I got a throughput of (close to) 30 Gbps. Not bad! Or so I thought. I then switched the last card to Ethernet mode as well and ... have never been able to reproduce the...
  5. [SOLVED] Slow 40GBit Infiniband on Proxmox 8.1.4

    I haven't given up yet. And I did find out something: the default (group) rate for an IB subnet (at least as provided on my SX6036) seems to be 10 Gbps. That's what the ibdiagnet output suggests, and I also found a link corroborating it, which said that the default setting of "3" for the...
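    If the subnet's default group rate really is the cap, the usual fix on an opensm-managed fabric is to raise the rate in the partition configuration. A minimal sketch, assuming opensm with its stock config path (the SX6036's built-in subnet manager exposes the same setting through its own CLI instead):

        # /etc/opensm/partitions.conf (path assumed; check your setup)
        # In the IB rate encoding, 3 = 10 Gbps and 7 = 40 Gbps;
        # mtu=5 selects a 4096-byte MTU for the IPoIB multicast group.
        Default=0x7fff, ipoib, mtu=5, rate=7 : ALL=full;

    Then restart the subnet manager (e.g. systemctl restart opensm) so the group is recreated at the new rate.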
  6. [SOLVED] Slow 40GBit Infiniband on Proxmox 8.1.4

    Great find! I'm going to try that as well.
  7. Looking for a "cheap" way to upgrade to 40gbit

    I have two potential explanations for the low throughput of my 40/56 Gbps IPoIB:
    - There might be a bottleneck in the PCIe slots that my IB cards sit in (I have relatively small servers with only a few PCIe lanes, and the available slots may have too few lanes to fully saturate the connections)
    - ...
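    Whether the slot is the limit is easy to check: compare the negotiated PCIe link width and speed against the port's line rate. A quick sketch (the PCI address is a placeholder):

        # Find the card, then inspect the negotiated link.
        lspci | grep -i mellanox
        lspci -s 65:00.0 -vv | grep -E 'LnkCap|LnkSta'

        # Rule of thumb: PCIe 3.0 carries ~7.9 Gbit/s of usable bandwidth
        # per lane, so x4 tops out near 31.5 Gbit/s and cannot saturate a
        # 40/56 Gbps port, while x8 (~63 Gbit/s) can.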
  8. Looking for a "cheap" way to upgrade to 40gbit

    Yeah, well, I already have the expensive switch, so... And while the IB switch has way more ports than my 10GbE switch, I actually need fewer of them, because my IB cards only have two ports whereas my 10GbE cards have four. And the whole purpose of the exercise was to move data faster...
  9. [SOLVED] Slow 40GBit Infiniband on Proxmox 8.1.4

    How/where did you buy the license? I thought you can't buy that anymore because the switch is EOL...
  10. Looking for a "cheap" way to upgrade to 40gbit

    Okay, so I got all three nodes to talk to each other over the new InfiniBand network. And my switch reports that all nodes are connected via 56 Gbps links (although I didn't purchase the more expensive cables :D). So that's a win. But the performance is underwhelming, to say the least: 10gbps -...
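    Worth noting: a single iperf stream over IPoIB is often CPU-bound rather than link-bound, so low single-stream numbers don't necessarily indict the fabric. A minimal sketch with parallel streams (the hostname is a placeholder):

        # On the receiving node:
        iperf3 -s

        # On the sending node: four parallel streams for 30 seconds.
        iperf3 -c node2 -P 4 -t 30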
  11. Looking for a "cheap" way to upgrade to 40gbit

    Ah, found this: https://pve.proxmox.com/wiki/Infiniband And got the VPI card to show up as an ethernet interface (while it is still being recognized by the switch). Getting closer...
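    The linked wiki page covers the Proxmox side; on ConnectX-3-era VPI cards the port personality itself is usually flipped with mlxconfig from the mft tools. A sketch, with the device node as a placeholder (1 = IB, 2 = ETH, 3 = VPI/auto):

        mst start                                  # create the /dev/mst device nodes
        mlxconfig -d /dev/mst/mt4099_pci_cr0 query | grep LINK_TYPE
        mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
        # A reboot (or driver reload) is needed for the change to apply.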
  12. Looking for a "cheap" way to upgrade to 40gbit

    Took me a while to get the new cards, and then a couple of other things diverted my focus. But I have been gradually replacing my nodes, each new node got one of the new cards, and now all nodes are equipped with true InfiniBand cards. One card is set to link type VPI and the other two are set...
  13. CEPH rebalancing soooo slooooowwwww

    Too bad this never got resolved, because I am in the same situation again: I have added a new node, and at the beginning rebalancing/backfilling was fast (probably because it was rebalancing/backfilling the new SSD), but after a while it settled at between 10 and 20 MiB/s (rebalancing/backfilling...
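    On recent Ceph releases, backfill is throttled by the mClock scheduler by default, which can explain a steady 10-20 MiB/s. A sketch of the usual knobs, assuming a Quincy-or-later cluster (revert once rebalancing finishes):

        # Prioritize recovery/backfill over client I/O.
        ceph config set osd osd_mclock_profile high_recovery_ops

        # On older releases the classic throttles apply instead:
        ceph config set osd osd_max_backfills 4
        ceph config set osd osd_recovery_max_active 8

        ceph -s    # watch the recovery rate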
  14. Calling on the hive mind: networking issue from hell

    Yes:

        # ping 192.168.252.230
        PING 192.168.252.230 (192.168.252.230) 56(84) bytes of data.
        From 192.168.252.237 icmp_seq=1 Destination Host Unreachable
        From 192.168.252.237 icmp_seq=2 Destination Host Unreachable
        From 192.168.252.237 icmp_seq=3 Destination Host Unreachable
        From 192.168.252.237...
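    "Destination Host Unreachable" reported by the sender's own address usually means ARP never resolved on the local segment. Two quick checks using the addresses from the snippet:

        ip route get 192.168.252.230     # which interface/route the kernel picks
        ip neigh show 192.168.252.230    # FAILED/INCOMPLETE points at layer 2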
  15. Calling on the hive mind: networking issue from hell

        pveversion -v
        proxmox-ve: 8.3.0 (running kernel: 6.8.12-7-pve)
        pve-manager: 8.3.3 (running version: 8.3.3/f157a38b211595d6)
        proxmox-kernel-helper: 8.1.0
        proxmox-kernel-6.8: 6.8.12-7
        proxmox-kernel-6.8.12-7-pve-signed: 6.8.12-7
        proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5...
  16. Rootless Docker inside unprivileged LXC container

    So are you saying you got rootless Docker working in an unprivileged LXC? Can you compare performance to rootful Docker or a privileged LXC? Is it much slower?
  17. Rootless Docker inside unprivileged LXC container

    So you're saying this isn't necessary. And that does sound plausible. But other than doing the same isolation twice, is there a particular downside to it? Like slower operation? What would you recommend doing instead: running rootful Docker in an unprivileged LXC, or running rootless Docker in a...
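    For context, either variant on Proxmox needs nesting (and typically keyctl) enabled on the container first. A sketch, assuming container ID 101:

        # /etc/pve/lxc/101.conf
        features: keyctl=1,nesting=1

        # or equivalently via the CLI:
        pct set 101 --features keyctl=1,nesting=1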
  18. [SOLVED] Don't include cd-roms

    Hmm, I have a Samba share on which the problematic ISO is sitting. The same share is available on the target node as well. The names are identical. And yet, it errors out. I believe that in my situation (at least), it should not error out but transfer the VM. And I agree that there should be...
  19. [SOLVED] Alert: api error (status 400 = Bad request): api error (status = 501 Not implemented) during the migration attempt from node to node

    First of all, thank you for this new project. I have been waiting for the ability to migrate VMs out of my cluster for a long time and now it seems to be within my grasp. BTW: I was expecting this functionality to be implemented directly into PVE but your implementation is probably better...
  20. Calling on the hive mind: networking issue from hell

    Good point, I have updated my OP with comments on this.