Search results

  1. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    You tested with a block size of 4M, so the IOPS look fine with that huge block size. Try it again with 4K (see the fio sketch after this list).
  2. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    With all cables attached (direct test) I reach the full 26 Gbit/s.
  3. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Here's an iperf3 test with a down link between pve01 and pve02, so the traffic is routed over pve03 (see the iperf3 sketch after this list).
  4. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    I use BTRFS as I can also put VMs there and use snapshots in the GUI.
  5. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Maybe calling the scripts as post-up in the /etc/network/interfaces file would fit best (see the interfaces sketch after this list). But I'm still struggling with interfaces that won't come up on node reboot until I manually run "ifup en0[56]" on the other (attached) host.
  6. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Wow, thank you for finding this @anaxagoras! @scyto, you should definitely have a look at this and add it to your github gist; this instantly bumps my iperf3 tests to a solid 26 Gbit/s with very low retransmits, even with my small i3! I created an rc.local to do so (see the rc.local sketch after this list): #!/bin/bash for id in $(grep...
  7. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Today I faced an outage again ... following the discussions here, I'm going to invest in new Thunderbolt cables. Currently I'm only hitting around 20 Gbit/s with lots of retries.
  8. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Currently running on 6.5.13-1-pve. They have only occurred twice so far; I installed the cluster around October last year. Hope it was just a kernel bug in the version used ^^
  9. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Just another hint ... my en0x interfaces sporadically didn't come up after a reboot. Changing auto en0x in the interfaces file to allow-hotplug en0x did the trick (see the interfaces sketch after this list). Running reef on ipv6 for a while now, but I've already had two cluster freezes where I had to completely cut the power...
  10. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Good point tbh :D One OSD per node would mean 1 E core for the OSD, and the other for the VMs if they need it. Maybe I will give that a try (even if the performance is already way more than I expected and therefore already more than okay :D), but for now I need to test the stability and...
  11. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    I'm also kinda surprised ^^ I also installed a Transcend MTS430S SSD for the OS itself, so that I can use the PM893s for Ceph only. Sadly I didn't look at the CPU usage during the tests, but I attached some screenshots from the monitoring during the test. Edit: I also attached a screenshot...
  12. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    So here are some performance tests from my Ceph cluster (see the benchmark sketch after this list). I used the i3 NUC13 (NUC13ANHi3) in a three-node full-mesh setup, with the Thunderbolt 4 net as the Ceph network. All tests were performed in a VM with 4 cores and 4 GB of RAM. Ceph is built with one Samsung PM893 data-center SSD (3.84 TB)...
  13. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    How's your performance when you scp a >50 GB file over the Thunderbolt link? Mine starts at around 750 MB/s, drops after around 7-10 GB, and is constantly stalling (see the transfer sketch after this list). Edit: also found this by myself ... it's the crappy OS SSD (Transcend MTS430S). Tested again with a Samsung OEM Datacenter SSD...
  14. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    So I found the difference ... for several pass-through tests I had set intel_iommu=on. For some reason this lowers the thunderbolt-net throughput. I reverted it (see the kernel command-line sketch after this list) and now have a stable ~21 Gbit/s over the Thunderbolt net in all directions with any node.
  15. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Yes, every node is directly connected with every other node. I also thought about a faulty cable at first, but both other hosts (pve02 & pve03) show the same results when pve01 is sending, and they are connected to different ports on pve01. pve02 <-> pve03 don't have any problems at all, btw.
  16. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    Hi everyone, thank you @scyto for your work to bring this to where it is now! Inspired by your github writeup I bought three NUC13s and successfully built my new 3-node homelab Ceph cluster. I have one connection (pve01 outgoing) which only reaches ~14 Gbit/s; pve01 incoming and all the other...
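
The fio sketch referenced in result 1: a 4K random read/write run that makes the IOPS comparison meaningful. This is a generic example, not the poster's actual command; the file path, size, mix, and runtime are assumptions.

    # 4K random read/write with direct I/O, so the page cache doesn't inflate the numbers
    # (path, size, and read/write mix are illustrative; run inside the test VM)
    fio --name=randrw-4k --filename=/root/fio-test.bin --size=8G \
        --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting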
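
The iperf3 sketch referenced in result 3 (and the 26 Gbit/s numbers in result 2): a minimal server/client pair over the Thunderbolt link. The IPv6 address stands in for pve02's Thunderbolt address and is a placeholder, not the cluster's actual addressing.

    # on pve02: start the server
    iperf3 -s

    # on pve01: run for 30 seconds towards pve02 (address is a placeholder)
    iperf3 -c fc00::82 -t 30

    # same client, reverse direction, to test the incoming path as well
    iperf3 -c fc00::82 -t 30 -R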
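
The interfaces sketch referenced in results 5 and 9, combining both ideas: allow-hotplug instead of auto, so the en0x interfaces are brought up whenever the thunderbolt-net link appears, plus a post-up hook for a custom script. The script path is hypothetical; the interface names follow the en05/en06 naming used in the thread.

    # /etc/network/interfaces (excerpt, illustrative)
    # allow-hotplug brings the interface up when the kernel announces it,
    # which suits Thunderbolt links that appear a moment after boot
    allow-hotplug en05
    iface en05 inet manual
            post-up /usr/local/bin/thunderbolt-up.sh en05 || true

    allow-hotplug en06
    iface en06 inet manual
            post-up /usr/local/bin/thunderbolt-up.sh en06 || true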
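
The rc.local sketch referenced in result 6. The quoted script is truncated right after the grep, so this is only an illustration of the general pattern (a loop over IDs pulled from a /proc listing, applying a per-ID setting); the choice of /proc/interrupts, the thunderbolt match, and the CPU mask are all assumptions, not the poster's actual script.

    #!/bin/bash
    # /etc/rc.local -- illustrative only; the original script in result 6 is cut off.
    # One plausible pattern: pin every thunderbolt interrupt to a fixed CPU set so
    # the interrupt load stops bouncing between cores. The mask "f" (CPUs 0-3) is
    # an arbitrary example, not a recommendation.
    for id in $(grep thunderbolt /proc/interrupts | cut -d ':' -f1 | tr -d ' '); do
        echo f > /proc/irq/"$id"/smp_affinity
    done
    exit 0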
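
The benchmark sketch referenced in result 12: a cluster-level rados bench run to accompany the in-VM tests. The pool name is a placeholder, and rados bench writes real objects, so point it at a scratch pool.

    # 60 s write benchmark against a scratch pool (pool name is a placeholder)
    rados bench -p bench-pool 60 write --no-cleanup

    # sequential and random read benchmarks reuse the objects written above
    rados bench -p bench-pool 60 seq
    rados bench -p bench-pool 60 rand

    # remove the benchmark objects afterwards
    rados -p bench-pool cleanup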
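
The transfer sketch referenced in result 13. Streaming from /dev/zero over ssh takes the disks on both ends out of the picture, which helps separate a slow source SSD from a slow link; hostnames, paths, and size are placeholders.

    # plain copy of a real file (source-disk speed is part of the result)
    scp /tank/bigfile.img root@pve02:/tank/

    # synthetic 50 GiB stream over ssh: no disk involved on either side, so
    # sustained throughput here reflects the link and ssh, not the OS SSD
    dd if=/dev/zero bs=1M count=51200 | ssh root@pve02 'cat > /dev/null'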
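
The kernel command-line sketch referenced in result 14, assuming the node boots via GRUB (systemd-boot installs edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead).

    # 1) check whether the running kernel was booted with intel_iommu=on
    cat /proc/cmdline

    # 2) remove "intel_iommu=on" from GRUB_CMDLINE_LINUX_DEFAULT in
    #    /etc/default/grub, then rebuild the boot config and reboot
    update-grub
    reboot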
