Zerotier cluster between hosts. ZT interfaces not working

adzza

New Member
Mar 15, 2021
Hi All,

New to Proxmox. I'm looking to set up a cluster of hosts over ZeroTier. I have installed ZT and the hosts can see and ping each other. All done, right? Not quite.

The ZT interface is not selectable in the clustering menus, so I need to add some configuration to make it appear. I scoured this forum and found the following:

- check the ZT interface name (command line on the PMX host, as root); an alternative check is sketched just after the snippet below:

ifconfig

In my case it is something like: ztnxxxxxx

- edit /etc/network/interfaces from the command line and add this snippet:

auto vmbr77
iface vmbr77 inet static
address 10.242.x1.x2
netmask 16
bridge-ports ztnxxxxxx
bridge-stp off
bridge-fd 0

10.242.x1.x2 is the IP you set up in the zerotier.com network for your PMX host.
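If ifconfig isn't available (the net-tools package isn't always installed on newer Proxmox hosts), a quick alternative check, assuming the zerotier-one service is already running:

Bash:
# list joined ZeroTier networks; the device column shows the interface name (e.g. ztnxxxxxx)
zerotier-cli listnetworks

# or simply list interfaces whose names start with zt
ip -br link show | grep '^zt'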



I have added the above so that the ZT interface is selectable in the cluster menu, and it looks like it's working, but in fact this breaks the interface: it is no longer reachable and the hosts can no longer ping each other via their ZeroTier addresses.

Hoping someone can assist? I am clearly missing something here, as I essentially break the interface/routing when adding the above configuration. Thanks in advance!
 
Change the IP on the vmbr77 interface to any different free IP from the same subnet.
If I understand you correctly, you now have the same address on the ztnxxxxx interface and on the vmbr77 interface.
That creates an IP conflict.
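For illustration only, a stanza following that advice might look like this (10.242.x1.x3 is a made-up placeholder for some free address on the same 10.242.0.0/16 ZeroTier network, different from the one ZeroTier assigned to ztnxxxxxx):

Bash:
auto vmbr77
iface vmbr77 inet static
        # a free address on the ZT subnet, NOT the one ZeroTier assigned to ztnxxxxxx
        address 10.242.x1.x3/16
        bridge-ports ztnxxxxxx
        bridge-stp off
        bridge-fd 0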
 
Tried this, but same issue sadly. As soon as I touch /etc/network/interfaces it stops working.
 
Hi,

It works for me:

Bash:
nano /etc/network/interfaces

Then add your new bridge interface:

Bash:
auto zt0
iface zt0 inet static
        address 10.X.X.X
        netmask 255.255.255.0
        bridge-ports ztxxxxxx
        bridge-stp off

Replace the IP address and the ZT interface name as needed.

Ctrl-X and save.

Then reload the interfaces:

Bash:
ifreload -a

Et voilà...

zt0 appears in the Proxmox GUI and you can use it for Corosync.
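A quick way to verify the result (a minimal check; zt0 is the bridge name from the snippet above and 10.X.X.Y stands for another node's ZeroTier address):

Bash:
ip -br addr show zt0    # the bridge should be UP with the address you configured
bridge link show        # the ztxxxxxx port should be listed as attached to zt0
ping -c 3 10.X.X.Y      # reach another node over its ZeroTier address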
 
Hello, any news about this? I have the same situation: what jojojou describes works until you restart the host; after the reboot it no longer works. Does anyone know of another way to connect remote Proxmox hosts into a cluster?
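If it helps with debugging the reboot issue, a couple of checks that can be run after a reboot (assuming the service is named zerotier-one, as in the standard Debian package):

Bash:
systemctl status zerotier-one    # is the ZeroTier service running after boot?
ip -br link show | grep '^zt'    # does the zt interface exist yet?
ifreload -a                      # manually re-apply /etc/network/interfaces once the zt interface is up (a workaround, not a permanent fix)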
 
I have it working without any configuration of the interfaces file. This is how:
1. Make sure that ZeroTier is installed on the cluster/node(s).
2. Optional but very handy: create a DNS entry with the ZeroTier IP for the cluster/node(s) in your DNS server; if that's not possible, just create the entries in each server's /etc/hosts file.
3. From the computer on which you want to create the cluster, run the create-cluster command from the CLI. DO NOT USE THE WEB INTERFACE.
# pvecm create CLUSTERNAME
4. Look for the cluster fingerprint and take note of it.
5. From the computer that you want to add as a node, run the cluster join command (a combined sketch of steps 2 to 5 follows the notes below):
Bash:
# pvecm add CLUSTERSERVER -fingerprint FI:NG:ER:PR:IN:TT
Note 1: here CLUSTERSERVER is the IP/DNS name of the computer that is already running the cluster; this is why I recommend doing step 2.

Note 2: when running the join command, it is expected that all the requirements are met: https://pve.proxmox.com/wiki/Cluster_Manager#_requirements
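Putting steps 2 to 5 together, a minimal sketch (pve1, pve2, the 10.147.17.x addresses and CLUSTERNAME are made-up placeholders; the fingerprint is the one noted in step 4):

Bash:
# on both hosts: map the ZeroTier IPs to names (step 2, /etc/hosts variant)
echo "10.147.17.10 pve1" >> /etc/hosts
echo "10.147.17.11 pve2" >> /etc/hosts

# on pve1: create the cluster from the CLI (step 3)
pvecm create CLUSTERNAME

# on pve2: join the cluster via the ZeroTier name, passing the fingerprint noted in step 4 (step 5)
pvecm add pve1 -fingerprint FI:NG:ER:PR:IN:TT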

This is how I have multiple clusters working with ZeroTier. If you have further questions or need a helping hand, don't hesitate to reach out.

FZ. -
 
