Recent content by rene.bayer

  1. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    I'm running with a shared cluster_network and public_network on the TB link. For external access I use direct host routes to the PVE hosts over IPv6. Here's an example: I had to disable multicast_snooping on the Proxmox vmbr0 interface to get IPv6 NDP, DAD, and IPv6 itself to work as...
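A minimal sketch of disabling multicast snooping on the bridge (the `bridge-mcsnoop` stanza assumes Proxmox's ifupdown2; the address and port names are placeholders, not from the post):

```shell
# /etc/network/interfaces: disable multicast snooping on vmbr0 so
# IPv6 NDP/DAD multicast is flooded instead of filtered by the bridge
auto vmbr0
iface vmbr0 inet6 static
    address fd00::10/64        # example address, adjust to your setup
    bridge-ports eno1          # example uplink port
    bridge-mcsnoop 0           # turn off multicast snooping

# Equivalent one-off runtime change via the standard bridge sysfs knob:
# echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
```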
  2. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    You tested with a block size of 4M, so the IOPS look fine with that huge bs. Try it again with 4K.
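To illustrate the point (a sketch only; the target path, queue depth, and runtimes are assumptions, not from the thread): a 4M block size mostly measures bandwidth, while 4K random I/O exposes per-operation latency — at equal throughput, 4K has to complete 1024× as many operations as 4M.

```shell
# Large sequential blocks: bandwidth-bound, IOPS numbers look inflated
fio --name=seq4m --filename=/dev/rbd0 --rw=read --bs=4M \
    --ioengine=libaio --direct=1 --iodepth=16 --runtime=30 --time_based

# Small random blocks: the honest IOPS number for Ceph latency
fio --name=rand4k --filename=/dev/rbd0 --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=16 --runtime=30 --time_based
```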
  3. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    With all cables attached (direct test) I reach the full 26 Gbit/s.
  4. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Here's an iperf3 test with a down link between pve01 and pve02, so the traffic is routed over pve03.
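The failover test can be reproduced roughly like this (hostnames pve01–pve03 are from the post; the commands themselves are an assumed sketch, run on real hosts):

```shell
# On pve02: start the iperf3 server
iperf3 -s

# On pve01, with the direct pve01<->pve02 cable unplugged:
# traffic should take the detour hop through pve03
iperf3 -c pve02 -t 30

# Confirm the path actually goes via pve03
traceroute -6 pve02
```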
  5. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    I use BTRFS as I can also put VMs there and use snapshots in the GUI.
  6. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Maybe calling the scripts as post-up in the /etc/network/interfaces file would fit best. But I'm currently still struggling with interfaces that won't come up on node reboot until I manually run "ifup en0[56]" on the other (attached) host.
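The post-up idea could look like the sketch below (the script path `/usr/local/bin/tb-routes.sh`, the address, and the interface stanza are illustrative assumptions, not the author's actual config):

```shell
# /etc/network/interfaces: run a routing script once the TB link is up
auto en05
iface en05 inet6 static
    address fd00::81/64                        # example address, adjust
    # post-up runs after the interface is configured; "|| true" keeps a
    # failing script from aborting ifup for the whole interface
    post-up /usr/local/bin/tb-routes.sh || true
```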
  7. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Wow, thank you for finding this @anaxagoras @scyto you should definitely have a look at this and add it to your GitHub gist, this instantly bumps my iperf3 tests to a solid 26 Gbit/s with very low retransmits, even with my small i3! I created an rc.local to do so: #!/bin/bash for id in $(grep...
  8. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Today I faced an outage again ... following the discussions here, I'm going to invest in new Thunderbolt cables. Currently I'm just hitting around 20 Gbit/s with lots of retries.
  9. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Currently running on 6.5.13-1-pve. The freezes have only occurred twice so far; I installed the cluster around October last year. Hope it was just a kernel bug in the version used ^^
  10. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Just another hint ... My en0x interfaces sporadically didn't come up after a reboot. Changing auto en0x to allow-hotplug en0x in the interfaces file did the trick. Running Reef on IPv6 for a while now, but I've already had two cluster freezes where I had to completely cut the power...
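The change described above could look like this (a sketch; `en05` and the address are placeholders). The difference: `auto` brings the interface up once at boot, which can race with Thunderbolt device enumeration, while `allow-hotplug` lets ifupdown configure it whenever the kernel announces the device via a hotplug event.

```shell
# /etc/network/interfaces

# Before: configured only during boot-time "ifup -a", so the interface
# is skipped if the Thunderbolt NIC appears late
#auto en05

# After: configured on the kernel hotplug event, whenever that happens
allow-hotplug en05
iface en05 inet6 static
    address fd00::81/64    # example address, adjust
```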
  11. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Good point tbh :D One OSD per node would mean one E core for the OSD and the other for the VMs if they need it. Maybe I will give that a try (even if the performance is already way more than I expected and therefore already more than okay :D), but for now I need to test the stability and...
  12. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    I'm also kinda surprised ^^ I also installed a Transcend MTS430S SSD for the OS itself, so that I can use the PM893s for Ceph only. Sadly I didn't look at the CPU usage during the tests, but I attached some screenshots from the monitoring during the test. Edit: I also attached a screenshot...