Maybe calling the scripts as post-up in the /etc/network/interfaces file would be the best fit.
But I'm currently still struggling with interfaces that won't come up on node reboot until I manually run "ifup en0[56]" on the other (attached) host.
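For reference, a minimal sketch of what such a post-up hook in /etc/network/interfaces could look like (the script path is a hypothetical placeholder, not something from this thread):

auto en05
iface en05 inet manual
    post-up /usr/local/bin/tb-irq-affinity.sh

The same post-up line would then go into the en06 stanza as well.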
Wow, thank you for finding this @anaxagoras
@scyto you should definitely have a look at this and add it to your github gist, this instantly bumps my iperf3 tests to a solid 26Gbit/s with very low retransmits, even with my small i3!
I created an rc.local to do so:
#!/bin/bash
for id in $(grep...
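The quoted script is cut off above, so everything past the grep is an assumption. A minimal sketch of an rc.local in that spirit, assuming the loop pulls the Thunderbolt IRQ numbers out of /proc/interrupts and pins them via smp_affinity (the "thunderbolt" match and the 0f mask are illustrative placeholders, not the author's exact values):

#!/bin/bash
# pin every IRQ whose /proc/interrupts line mentions "thunderbolt"
# to the cores selected by the affinity mask below
for id in $(grep thunderbolt /proc/interrupts | cut -d ':' -f1); do
    echo 0f > /proc/irq/$id/smp_affinity
done
exit 0

Don't forget to make the file executable (chmod +x /etc/rc.local), otherwise systemd's rc-local service won't run it.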
Today I faced an outage again ...
Following the discussions here, I'm going to invest in new Thunderbolt cables.
Currently I'm just hitting around 20Gbit/s with lots of retries.
Currently running on 6.5.13-1-pve.
They have only occurred twice so far; I installed the cluster around October last year.
Hope it was just a kernel bug in the version used ^^
Just another hint ... my en0x interfaces sporadically failed to come up after a reboot.
Changing auto en0x to allow-hotplug en0x in the interfaces file did the trick there.
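In /etc/network/interfaces the change looks roughly like this (en05 as the example name, matching the en0[56] naming used elsewhere in the thread; the rest of each stanza stays untouched):

# before
auto en05
iface en05 inet manual

# after
allow-hotplug en05
iface en05 inet manual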
Running Reef on IPv6 for a while now, but I already had two cluster freezes where I had to completely cut the power...
Good point tbh :D
One OSD per node would mean one E-core for the OSD and the other for the VMs if they need it.
Maybe I will give that a try (even if the performance is already way more than I expected and therefore already more than okay :D), but for now I need to test the stability and...
I'm also kinda surprised ^^
I also installed a Transcend MTS430S SSD for the OS itself, so that I can use the PM893s for Ceph only.
Sadly I didn't look at the CPU usage during the tests, but I attached some screenshots from the monitoring during the test.
Edit: I also attached a screenshot...
So here are some performance tests from my Ceph cluster.
I used the i3 NUC13 (NUC13ANHi3) in a three-node full-mesh setup, with the Thunderbolt 4 net as the Ceph network.
All tests were performed in a VM with 4 cores and 4GB of RAM.
The Ceph cluster is built with one Samsung PM893 Datacenter SSD (3.84TB)...
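The exact benchmark command isn't shown in the post, but for anyone who wants to run a comparable test from inside a VM, a generic fio example (file name, size and parameters are placeholders, not the author's settings):

fio --name=randwrite --filename=/root/fiotest --size=4G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting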
How's your performance when you scp a >50GB file over the Thunderbolt link?
Mine starts at around 750MB/s, drops after around 7-10GB, and then constantly stalls.
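In case anyone wants to reproduce this, one quick way to create a large test file and copy it over the Thunderbolt address (host name and paths are just placeholders):

dd if=/dev/zero of=/root/test50g.bin bs=1M count=51200
scp /root/test50g.bin root@pve02:/tmp/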
Edit:
Also found this by myself ... it's the crappy OS SSD (Transcend MTS430S).
Tested again with a Samsung OEM Datacenter SSD...
So I found the difference ....
For several passthrough tests I had set intel_iommu=on.
For some reason, this lowers the thunderbolt-net throughput.
I reverted this and now have a stable ~21Gbit/s over the thunderbolt net in all directions with any node.
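For anyone who wants to check this on their own nodes: on a GRUB-booted Proxmox host the parameter usually lives in /etc/default/grub (systemd-boot installs use /etc/kernel/cmdline instead). A rough sketch of the revert, assuming an otherwise default cmdline:

cat /proc/cmdline                      # check whether intel_iommu=on is currently active
# in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"     # intel_iommu=on removed
update-grub && reboot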
Yes, every node is directly connected with every node.
I also thought about a faulty cable at first, but both other hosts (pve02 & pve03) are showing the same results when pve01 is sending,
and they are connected to different ports on pve01.
pve02 <-> pve03 do not have any problems at all btw.
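To narrow something like this down per direction, iperf3 can be run both ways from pve01 (addresses are placeholders; -R reverses the test so the server side sends):

# on pve02 (and pve03):
iperf3 -s
# on pve01, outgoing and then incoming over the same link:
iperf3 -c <pve02-thunderbolt-addr>
iperf3 -c <pve02-thunderbolt-addr> -R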
Hi everyone,
thank you @scyto for your work to bring this to where it is now!
Inspired by your github writeup I bought three NUC13s and successfully built my new 3-node homelab Ceph cluster.
I have one connection (pve01 outgoing) which only reaches ~14Gbit, while pve01 incoming and all the other...