Search results

  1. Looking for a "cheap" way to upgrade to 40gibt

    Yeah, well, I already have the expensive switch, so... And while the IB switch has way more ports than my 10gbe switch, I actually do need fewer of them because my IB cards only have two ports whereas my 10gbe cards have four. And the whole purpose of the exercise was to move data faster...
  2. [SOLVED] Slow 40GBit Infiniband on Proxmox 8.1.4

    How/where did you buy the license? I thought you can't buy that anymore because the switch is EOL...
  3. Looking for a "cheap" way to upgrade to 40gibt

    Okay, so I got all three nodes to talk to each other over the new infiniband network. And my switch reports that all nodes are connected via 56gbps links (although I didn't purchase the more expensive cables :D). So that's a win. But the performance is underwhelming, to say the least: 10gbps -...
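
    For context, a quick way to sanity-check raw link throughput between two nodes is iperf3; a single TCP stream rarely saturates a 40/56 Gb link, so testing several parallel streams is worth doing. A minimal sketch (the IP address is a placeholder):

      iperf3 -s                          # on the receiving node
      iperf3 -c 10.10.10.2 -P 4 -t 30    # on the sending node; four parallel streams for 30 s
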
  4. Looking for a "cheap" way to upgrade to 40gibt

    Ah, found this: https://pve.proxmox.com/wiki/Infiniband and got the VPI card to show up as an ethernet interface (while it is still being recognized by the switch). Getting closer...
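
    For anyone following along: on a ConnectX-3 VPI card the port personality (IB vs. Ethernet) can be flipped with the Mellanox firmware tools. A minimal sketch, assuming mst/mlxconfig are installed; the device path is illustrative and will differ on your system:

      mst start
      # 1 = IB, 2 = ETH, 3 = VPI/auto; the new link type takes effect after a reboot
      mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
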
  5. Looking for a "cheap" way to upgrade to 40gibt

    Took me a while to get the new cards and then a couple of other things diverted my focus. But I have been gradually replacing my nodes and each new node got one of the new cards, and now all nodes are equipped with true infiniband cards. One card is set to link type VPI and the other two are set...
  6. CEPH rebalancing soooo slooooowwwww

    Too bad this never got resolved because I am again in the same situation: I have added a new node and at the beginning, rebalancing/backfilling was fast (probably because it was rebalancing/backfilling the new SSD) but after a while it settled at between 10 and 20 MiB/s (rebalancing/backfilling...
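
    One thing worth checking in this situation (not necessarily the cause here) is whether backfill is simply throttled by the conservative OSD defaults; a sketch of how to inspect and temporarily raise them (the values are illustrative):

      ceph config get osd osd_max_backfills
      ceph config get osd osd_recovery_max_active
      ceph config set osd osd_max_backfills 4
      ceph config set osd osd_recovery_max_active 8
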
  7. Calling on the hive mind: networking issue from hell

    Yes:
      # ping 192.168.252.230
      PING 192.168.252.230 (192.168.252.230) 56(84) bytes of data.
      From 192.168.252.237 icmp_seq=1 Destination Host Unreachable
      From 192.168.252.237 icmp_seq=2 Destination Host Unreachable
      From 192.168.252.237 icmp_seq=3 Destination Host Unreachable
      From 192.168.252.237...
  8. Calling on the hive mind: networking issue from hell

    pveversion -v
      proxmox-ve: 8.3.0 (running kernel: 6.8.12-7-pve)
      pve-manager: 8.3.3 (running version: 8.3.3/f157a38b211595d6)
      proxmox-kernel-helper: 8.1.0
      proxmox-kernel-6.8: 6.8.12-7
      proxmox-kernel-6.8.12-7-pve-signed: 6.8.12-7
      proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5...
  9. Rootless Docker inside unprivileged LXC container

    So are you saying you got rootless docker working in an unprivileged LXC? Can you compare performance to rootful docker or a privileged LXC? Is it much slower?
  10. Rootless Docker inside unprivileged LXC container

    So, you're saying this isn't necessary. And that does sound plausible. But other than doing the same isolation twice, is there a particular downside to it? Like slower operation? What would you recommend doing instead? Running rootful docker in an unprivileged LXC or running rootless docker in a...
  11. [SOLVED] Don't include cd-roms

    Hmm, I have a Samba share on which the problematic ISO is sitting. The same share is available on the target node as well. The names are identical. And yet, it errors out. I believe that in my situation (at least), it should not error out but transfer the VM. And I agree that there should be...
  12. [SOLVED] Alert: api error (status 400 = Bad request): api error (status = 501 Not implemented) during the migration attempt from node to node

    First of all, thank you for this new project. I have been waiting for the ability to migrate VMs out of my cluster for a long time and now it seems to be within my grasp. BTW: I was expecting this functionality to be implemented directly into PVE but your implementation is probably better...
  13. Calling on the hive mind: networking issue from hell

    Good point, I have updated my OP with comments on this.
  14. Calling on the hive mind: networking issue from hell

    Hi, so I have a small home lab cluster of 3 PVE nodes and one additional PBS. And I have the following networks (each on separate NICs/cables/switches):
    - Management
    - Corosync
    - Ceph
    - Backup
    The PVE nodes are connected to all networks. The PBS is only connected to Management and Backup...
  15. Search backup not for date but for file

    Hi, first of all: Thank you very much for PBS (and PVE). It is such a great piece of software. Currently, I can search the backups on PBS by selecting the VM and then an individual backup by date. There, I can dive into the file system and retrieve a file. Sometimes, I would like to first...
  16. Off network PBS best practice?

    Seeing that there doesn't seem to be a solution where there is only one PBS that doesn't sit on the same network as the PVE (save piercing a hole in the firewall that ought to separate the networks), I would like to understand whether the following would be possible: There is one small PBS on...
  17. Off network PBS best practice?

    Yeah, that's what I have currently, but I was wondering whether there might be another setup that doesn't require two separate PBSs.
  18. NIC keeps changing interfaces on reboots

    For me, it has been working since I adopted names which the systems wouldn't use (like lan0 and lan1).
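
    For reference, one common way to pin such names is a systemd .link file that matches the NIC by MAC address; a minimal sketch (the file name and MAC are placeholders):

      # /etc/systemd/network/10-lan0.link
      [Match]
      MACAddress=aa:bb:cc:dd:ee:ff

      [Link]
      Name=lan0
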
  19. Off network PBS best practice?

    So I have been running a PBS to complement my home lab PVE cluster for a while now and everything works beautifully. And yet I am not fully happy because if my (management) network ever were breached and my VMs compromised, the next steps for an attacker would be to go after my backups. And if...
  20. Help configuring vGPU?

    I must have, because I got vGPU working after a while. But I don't remember the exact steps unfortunately. But I gave up on using vGPU because
    - it was flaky, hit and miss
    - the concept of dividing my card into virtual GPUs isn't actually right for my use case because I keep experimenting and...