I found a workaround: edit the /etc/default/grub file, append the following to GRUB_CMDLINE_LINUX_DEFAULT, and run update-grub afterwards :)
pcie_port_pm=off pcie_aspm.policy=performance
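For clarity, the edit can be sketched like this. It is demonstrated on a sample copy of the file; on a real node you would apply the same sed to /etc/default/grub (after backing it up), then run update-grub and reboot.

```shell
# Sketch of the /etc/default/grub edit, demonstrated on a sample copy.
# On a real node: back up /etc/default/grub, apply the same sed there,
# then run `update-grub` and reboot.
grub=/tmp/grub.sample
cat > "$grub" <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
EOF

# Append the two PCIe power-management options to the existing value
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 pcie_port_pm=off pcie_aspm.policy=performance"/' "$grub"

grep '^GRUB_CMDLINE_LINUX_DEFAULT' "$grub"
# GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_port_pm=off pcie_aspm.policy=performance"
```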
I have a NUC13 with an i226v 2.5Gbit/sec NIC, but it only runs at 1Gbit/sec. Proxmox 8 is installed, with the latest kernel. My internet is 1Gbit/sec down/up. All testing is done directly on Proxmox itself, no VM. On my other NUC8 the speed is always 1Gbit/sec down/up, whatever I do.
When I use scp...
No, not related. I tested both USB passthrough and "usbip". With my Z-Wave I had too many drops/retries with passthrough, and with "usbip" zero issues. That is why I moved fully to "usbip" (it is also more flexible).
I cancelled my original order because the delivery date kept shifting. I ordered on Monday from Mouser (EU); it shipped out the same Monday and arrived today :-) I will install it tonight in my second NUC13.
BTW - I did a bit more testing, but NUC8 TB3 <-> NUC13 TB4 is not really super reliable, so...
The only frustrating part is the automatic bind on start-up, but it can be hacked into a service. I made another bash script which does it all, including status, logging, etc.
I uploaded it to my gist:
https://gist.github.com/ualex73/e6d6088120840a10e126d62fe4061079
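For anyone who wants the bare service-hack version instead of the full script, a minimal sketch of such a unit could look like this. The bus ID 1-1.4 and the /usr/bin paths are assumptions: find your bus ID with `usbip list -l`, and the binary location may differ per distro.

```ini
# /etc/systemd/system/usbip-bind.service — a minimal sketch, not the full script.
# Assumptions: usbip tools installed at /usr/bin, device bus ID 1-1.4
# (find yours with `usbip list -l`).
[Unit]
Description=Bind USB device for usbip export
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Start the usbip daemon, then bind the device for export
ExecStartPre=/usr/bin/usbipd -D
ExecStart=/usr/bin/usbip bind -b 1-1.4
ExecStop=/usr/bin/usbip unbind -b 1-1.4
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable with `systemctl enable --now usbip-bind.service`; the script in the gist adds status and logging on top of this.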
I tried a build with Ceph, but did not get CephFS directly to work (gave up quickly, I know) ... but your steps are super helpful, so I will give it another try soon.
Correct, I am from the Netherlands. Good tip about Mouser. If my current supplier cannot deliver within a week or so, I will cancel and order there.
I am using "usbip" to connect the USB devices inside the Proxmox VMs, because USB passthrough via the hardware is slow and not stable. Technically I can...
Yes, I found the 4000 MTU somewhere in the TB source code, hence the test around it.
I am almost happy with my setup, but I think I will not build the Ceph storage on my end. I normally do not keep 3 nodes up-and-running, so there is no big benefit for me (my Home Assistant node is connected to...
@scyto I tested different MTU sizes on the thunderbolt link; it is better to configure 4000.
MTU 9000 gives:
Accepted connection from 10.0.0.2, port 32842
[ 5] local 10.0.0.3 port 5201 connected to 10.0.0.2 port 32852
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.30 GBytes...
I am also running "ANRPL357.0027.2023.0607.1754", I updated all firmware before I began with the rest. What type of issues did you notice with the ethernet board?
Good that it works fine for you, so I most likely have a small typo somewhere, and good that you spotted the naming of the file. My original files were named 00-thunderboltX.link, but when I copied them ... I changed it to 10/11, do not know why :) More testing is required on my side.
@scyto - question: how do you get around the en05/en06 naming, depending on who connects first?
So if I have connected:
nuc1 thunderbolt0 -> nuc2 thunderbolt1
Then on both I see them getting en05, whereas I expect nuc1 to get en05 and nuc2 en06.
The dmesg output shows it sees thunderbolt0 on...
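One idea: a systemd .link file that matches the port by its persistent PCI path instead of probe order, so the name would no longer depend on which side connects first. A sketch; the Path= value is hypothetical, check yours with `udevadm info /sys/class/net/<iface> | grep ID_PATH`:

```ini
# /etc/systemd/network/00-thunderbolt0.link — sketch: match by persistent
# PCI path rather than by which interface appears first.
# The Path= value below is hypothetical; verify with:
#   udevadm info /sys/class/net/<iface> | grep ID_PATH
[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net

[Link]
MACAddressPolicy=none
Name=en05
```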
I believe you need to configure them as:
Node1 en05: 10.0.0.5
Node1 en06: 10.0.0.9
Node2 en05: 10.0.0.10
Node2 en06: 10.0.0.13
Node3 en05: 10.0.0.14
Node3 en06: 10.0.0.6
All /30 of course: 3 subnets instead of the 6 you have now.
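In /etc/network/interfaces terms, Node1 from that plan would look roughly like this (a sketch; MTU 4000 taken from the earlier test):

```text
# Node1 sketch: en05 peers with Node3 en06 (10.0.0.4/30),
# en06 peers with Node2 en05 (10.0.0.8/30)
auto en05
iface en05 inet static
    address 10.0.0.5/30
    mtu 4000

auto en06
iface en06 inet static
    address 10.0.0.9/30
    mtu 4000
```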
@scyto I do not use a cluster YET, I am still in the process of getting the network stable.
I also tested with a vanilla Debian 12, and it shows the same problem you had before in Proxmox 8: IPv4 works perfectly, IPv6 does not (only ping works). Seems IPv6 in thunderbolt-net is just...