Intel NUC 13 Pro Thunderbolt Ring Network Ceph Cluster

l-koeln (New Member), Jul 24, 2023
Hello,

I have a problem with my Proxmox Intel NUC cluster.

It consists of 3 Intel NUC 13 Pro units, each using its 2 Thunderbolt ports to form a ring network for Ceph communication, following this guide:
Proxmox/Ceph – Full Mesh HCI Cluster w/ Dynamic Routing

To handle the network routing, I am using FRR with OSPF.

To enable the Thunderbolt ports as network devices, I loaded the kernel modules "thunderbolt" and "thunderbolt-net" via "/etc/modules"
and renamed the interfaces from "thunderbolt0" / "thunderbolt1" to "en05" / "en06" to make them visible in the Proxmox GUI, as described here:
Connect 2 hosts via thunderbolt 3
For the renaming, I created 2 files in "/etc/systemd/network/": "10-thunderbolt0.link" and "10-thunderbolt1.link".

example: 10-thunderbolt0.link
Code:
[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
MACAddress=02:89:12:b5:35:cf
Name=en05

In general it works! The network runs at 10 Gbit, writing to the NVMe Ceph storage is super fast, and moving a VM to another node works great.
BUT!

Problem:

Every time I reboot a node (all show the same behavior), the network interfaces en05 and en06 (aka Thunderbolt ports) are down.
Code:
5: en05: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:bf:2f:cf:19:a1 brd ff:ff:ff:ff:ff:ff
6: en06: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:ae:57:53:99:83 brd ff:ff:ff:ff:ff:ff

OK, I thought running "ip link set en05 up" and "ip link set en06 up" at startup would solve the problem, but no.

Code:
5: en05: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 02:bf:2f:cf:19:a1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::bf:2fff:fecf:19a1/64 scope link
       valid_lft forever preferred_lft forever
6: en06: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 02:ae:57:53:99:83 brd ff:ff:ff:ff:ff:ff

It turns out that it matters in which order you enable the connection on each node:
Code:
works:
on Nuc1:                       ------->               on Nuc2:
ip link set en05 up                                ip link set en06 up

does not work:
on Nuc1:                        ------->               on Nuc2:
ip link set en06 up                                 ip link set en05 up

The behavior is the same for every node!
But that's not all: to actually establish a TCP connection or ping another node,
after every reboot I have to change the IP address of en05 / en06, restart the networking service, and then change the address back to its original value.
Code:
systemctl restart networking.service
After that it works just fine and as intended.
I don't know what causes this; I found this action (by accident) to "solve" the problem.
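Until the root cause is found, the bring-up order and the networking restart could be automated with a systemd oneshot unit at boot. This is only a sketch based on the workaround described above; the unit name and binary paths are my own invention:

```
# /etc/systemd/system/tb-mesh-up.service (hypothetical unit name)
[Unit]
Description=Bring Thunderbolt mesh links up and re-apply network config
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link set en05 up
ExecStart=/usr/sbin/ip link set en06 up
ExecStart=/usr/bin/systemctl restart networking.service

[Install]
WantedBy=multi-user.target
```

Enable it with "systemctl enable tb-mesh-up.service"; it only papers over the symptom, it does not explain it.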

Does anybody know why I have to reassign IP addresses to communicate over "en05" and "en06"?


Documentation

Proxmox version: 8.0.3
FRR version: 8.4.2



Code:
#FRR config, router ID different on each device

ip forwarding
!
router ospf
 ospf router-id 0.0.0.1
 log-adjacency-changes
 exit
!
interface lo
 ip ospf area 0
 exit
!
interface en05
 ip ospf area 0
 ip ospf network point-to-point
 exit
!
interface en06
 ip ospf area 0
 ip ospf network point-to-point
 exit
!
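The "interface lo / ip ospf area 0" stanza above only helps if each node carries a unique routable address on its loopback, which OSPF then advertises into the mesh. A sketch of a matching /etc/network/interfaces fragment (the 10.0.0.81/32 address is an example of my own; use a different one per node):

```
auto lo
iface lo inet loopback
        # unique per-node /32 advertised by OSPF (example address)
        post-up ip addr add 10.0.0.81/32 dev lo
```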
 
Hi, I am currently researching using NUCs and Proxmox and found your article helpful.

1. Did you try using the IPv6 approach from the article you linked instead of IPv4 on the TB links? It has many advantages, as IP addresses are inherently dynamic.

2. Did you consider making the network service not start until all the Thunderbolt services are started? I am wondering if the startup sequence is why you see issues at boot (I use GlusterFS in my current cluster and have seen issues like this before).
 
OK, I have now installed Proxmox and have 10 minutes more knowledge than last time.

1. Why bother giving either Thunderbolt interface an IP when they can just be placed in a new bridge? (I am asking because I don't know.)
2. I note that the names thunderbolt0 and thunderbolt1 are not mapped consistently to the Thunderbolt devices:

so `/devices/pci0000:00/0000:00:0d.2/domain0/0-0/0-1/0-1.0/net/` can be thunderbolt1 or thunderbolt0; what seems to determine the number is the order in which the cables are plugged in

3. The MAC addresses of the interfaces are not fixed, so you may not be able to rely on them for reliable matching in that link file (if the link file does what I assume it does). Though I am not sure this matters, it is interesting to me how much is dynamic.

4. I also note that at the udev level, both the `ATTR{ifindex}` and `ATTR{iflink}` attributes are dynamic.

Lastly, did you try this older approach to renaming? I got it to work for one of the interfaces, but not for both at the same time, and I can't figure out why:

https://www.shellhacks.com/change-network-interface-name-eth0-eth1-eth2/
Code:
# PCI Thunderbolt 0
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="thunderbolt0", NAME="eth5"
# PCI device thunderbolt1
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="thunderbolt1", NAME="eth6"
 
OK, I *seem* to have this working now.
You must use thunderbolt0/thunderbolt1 in the udev rule but en05/en06 in the script, which makes it a bit confusing.
I found a slightly different approach...

My link files look like this:
Code:
/etc/systemd/network/00-thunderbolt0.link
[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en05

/etc/systemd/network/00-thunderbolt1.link
[Match]
Path=pci-0000:00:0d.3
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en06

My udev rule file looks like this:

Code:
/etc/udev/rules.d/10-tb-en.rule
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"
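The two RUN+= scripts are referenced but never shown in the thread. A minimal sketch of what /usr/local/bin/pve-en05.sh could contain (the log path and the commented-out address are my assumptions, not from the post):

```shell
#!/bin/bash
# Hypothetical /usr/local/bin/pve-en05.sh, fired by the udev "move" rule:
# bring the renamed interface up once the rename has settled.
LOGFILE="/tmp/udev-en05.log"
{
    echo "$(date): en05 move event, bringing link up"
    /usr/sbin/ip link set en05 up
    # optionally assign the address here instead of /etc/network/interfaces:
    # /usr/sbin/ip addr add 10.0.0.81/24 dev en05
} >> "$LOGFILE" 2>&1
```

The en06 counterpart would be identical with the interface name swapped.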

Seems to be working perfectly! I like this way as it keeps the names consistent, but that doesn't seem important, lol. The real fun tonight was learning udev :)

I selected the "move" action based on the output I was seeing from `udevadm monitor` (which I had never looked at in my life), where I noted that the final action in the sequence of standing up the renamed interface is a move event. I had also verified that the kernel name was indeed changed by the link file.

My IPv6 OSPF is working too. I will publish my approach as part of this gist, but for now it's late and bedtime, and I got bored writing the gist over the last hour, lol.

This is my first time ever with Proxmox; I still have to get Ceph installed and play with that next!

I couldn't have done this without your breadcrumb trail. So many many thanks!
 
Hmm, last night's message is waiting for moderation. In the meantime, this is my in-progress gist documenting what I did yesterday; it will get updated over the next days/weeks as I complete the todo list at the end. The gist is for others who find this thread (especially folks new to Proxmox like me); for me it is self-documentation so I don't forget what I learnt.
https://gist.github.com/scyto/76e94832927a89d977ea989da157e9dc
 
@scyto thanks for the write-up! I got your message partially in the email notification, but the link is handier :)
 
I have spent a few days banging my head against the wall.

1. No matter how I configure the thunderbolt interfaces (en05 and en06), something is blocking almost all ports other than 8006. I disabled the firewall, and that doesn't help. I enabled the firewall with both the default in and default out policies set to ACCEPT; that doesn't help either. This issue stops SSH and Ceph from working over IPv6 (and I am sure more things too).

2. I have tried FRR OSPF routing for IPv6. It worked (I could ping), but issue #1 made it of no use until that is fixed. I never tried IPv4 (that's the next job, given that fabricd below failed with IPv4).

3. I have tried FRR fabricd routing for IPv6 + IPv4. This worked for IPv6 (nodes could ping each other) but utterly failed with IPv4, as it was not creating default routes (so I could ping over IPv6 but not over IPv4). It also suffered from issue #1, making it an even worse scenario than #2. Also, with fabricd the FRR service needs to be bumped after the rename of the adapters to work (this wasn't the case with FRR OSPF).

4. I tried static configuration directly on en05 and en06 for IPv6 and hit the same issue as #1.

I have tried various combinations of firewall settings.

This is all PVE 8. I am starting to think there are some fundamental bugs in this new release rather than me being stupid; I wonder if it is worth trying the last 7.x build?

(Oh, and I finally answered why it sometimes seems necessary to put thunderbolt and thunderbolt-net in /etc/modules and sometimes not. Simple: if it is on no nodes, thunderbolt networking doesn't start; if it is on just one node and that node comes up first, it will start correctly on all the other nodes. tl;dr: put it in /etc/modules on all nodes to be sure. This seems like a bug, IMO.)
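To spell that tl;dr out, the module list from the first post, pinned on every node:

```
# /etc/modules: kernel modules to load at boot on every node
thunderbolt
thunderbolt-net
```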
 
1. Strange, on my Proxmox 8 it works fine when I have the following rule in "/etc/pve/firewall/cluster.fw":
Code:
IN ACCEPT -i thunderbolt0 -p tcp -log nolog

The other points I cannot comment on; I only have IPv4 and no ring cluster (just 2 nodes).
 
I wonder what I am missing / did wrong. I disabled the firewall (to be clear, it is still running), and iptables implies it accepts everything on all ports and interfaces. I also tried setting the default in and out policies to ACCEPT. The part that's confusing me is that the web port works just fine on the thunderbolt interfaces, but the SSH and Ceph ports don't respond.

Did you ever run FRR, or did you just use IPs directly on the interfaces? And did you set those in your interfaces file and/or the link files?
 
The firewall/iptables setup of Proxmox is a bit strange, so it is best to add an explicit rule allowing anything on thunderbolt0/1/en05/en06. Please note that port 8006 is implicitly enabled, so that always works (SSH should only work within the OAM subnet of your first NIC, so thunderbolt will not work by default).
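A sketch of such explicit rules in /etc/pve/firewall/cluster.fw, using the renamed interface names from earlier in the thread (these exact rules are my assumption of "allow anything", not taken from a post):

```
[RULES]
IN ACCEPT -i en05 -log nolog
IN ACCEPT -i en06 -log nolog
```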

I use direct IP addresses with udev rules, up'ing the NIC and assigning the IP; via "/etc/network/interfaces" it just does not work. I did not rename the interfaces (yet), because I do not need them visible in the GUI. If I find time this week, I will try FRR (my NUCs are only used for testing at the moment, and a reinstall takes 10 minutes ;-)).
 
OK, done more testing today.

The new enp87s0 interfaces in each NUC work perfectly with IPv6. (The Intel NUCIOALUWS is a great way to add another port!)

This means IPv6 is utterly broken on the thunderbolt interfaces (no matter whether they retain their original names or are renamed).
I am unclear whether this is a Proxmox issue or an underlying Debian issue.
Workaround: stop trying to use IPv6 and use IPv4 instead... I guess ;-)
 
Yes, the Intel add-on is a nice card; I have it too in my NUC 13 :)

I would guess an underlying issue; Proxmox just uses Debian's iptables and its own kernel (based on Ubuntu's). If you have time, you could try vanilla Debian on both nodes and see whether it works. If it does, you could flag this to the Proxmox guys.
 
I am still trying to understand iptables in general.

However, if this is my ip6tables rule set, I don't think anything is being blocked by ip6tables, as the default policy is ACCEPT for everything
(my normal iptables looks the same as this too). I agree a repro on Debian would be the next thing to do. Which seems tiring, lol.

Code:
root@pve2:~# ip6tables -L -n -v
Chain INPUT (policy ACCEPT 34 packets, 5296 bytes)
 pkts bytes target     prot opt in     out     source               destination       


Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination       


Chain OUTPUT (policy ACCEPT 19 packets, 4164 bytes)
 pkts bytes target     prot opt in     out     source               destination
 
Your iptables should allow anything for IPv6; it could be a bug in the thunderbolt driver.
 
Yes, I agree.

I am a little concerned that no one from Proxmox seems interested in replying to any of my posts (I think they are getting buried because of time zone differences). This makes me question whether using Proxmox makes sense for me at all, if time zone / time of posting is an issue :-(

Also, Discord and Reddit have proved less than fruitful too...

I am certainly not going to test with pure Debian unless I know someone from Proxmox cares / is paying attention (as that would just be me pushing more water uphill).

What has your experience been in getting help from others?
 
@scyto, at this moment the most useful information I am getting is from l-koeln and you... Also, trying to google for thunderbolt-net is mission impossible; I could not find anything useful.
 
