Has anyone installed tailscale.com (WireGuard-based "VPN") on the host hypervisor?

I have now finished correctly configuring tailscale.com ACLs on all my hardware machines and Proxmox VPSs (not on the Proxmox hypervisor itself).

It works very well; WireGuard works perfectly and is fast.

I divided it into 3 networks (a rough sketch of the ACL policy follows the list):
1st: super admin (me)
2nd: vps-servers-admin group access
3rd: non-super-admin users with access only to the VPSs on ports 443 and 1935
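Roughly this shape in the admin console, with made-up names; the tag:vps tag and the example accounts are placeholders, not my real policy:

Code:
{
  // groups used for the three "networks"
  "groups": {
    "group:vps-admins": ["admin1@example.com"],
    "group:streamers":  ["user1@example.com", "user2@example.com"]
  },
  "acls": [
    // 1st: super admin, full access to everything
    {"action": "accept", "src": ["me@example.com"], "dst": ["*:*"]},
    // 2nd: vps-servers-admin group, full access to the VPS machines
    {"action": "accept", "src": ["group:vps-admins"], "dst": ["tag:vps:*"]},
    // 3rd: everyone else only reaches the VPSs on ports 443 and 1935
    {"action": "accept", "src": ["group:streamers"], "dst": ["tag:vps:443,1935"]}
  ]
}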

I simulated 3 reboots and everything works correctly.

(I can't find a way to install it in LXC containers) ..

Code:
root@FFMEG:~# tailscale up
2020/10/17 14:58:53 Failed to connect to connect to tailscaled. (safesocket.Connect: dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory)
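From what I've read, that socket error usually means tailscaled never started inside the container, most often because /dev/net/tun is not available there. A commonly suggested change (untested by me; the container ID placeholder and the cgroup v1 syntax of Proxmox 6.x are my assumptions) is:

Code:
# /etc/pve/lxc/<CTID>.conf  (replace <CTID> with the container ID)
# allow the container to use the TUN device and bind-mount it in
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

After restarting the container, tailscaled has to be started again inside it (systemctl enable --now tailscaled) before retrying tailscale up.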

My question is: can it create problems on the Proxmox hypervisor? It could be very useful for backup procedures and manual migrations (on non-cluster Proxmox configurations).
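For example, the kind of thing I have in mind is pushing vzdump backups between standalone nodes over their Tailscale IPs (only a sketch; the guest ID, the target address 100.101.121.12 and the paths are made up):

Code:
# on the source node: dump a guest, then copy it to another standalone node over the tailnet
vzdump 100 --dumpdir /var/lib/vz/dump --mode snapshot
rsync -av --progress /var/lib/vz/dump/ root@100.101.121.12:/var/lib/vz/dump/

On the receiving node the dump can then be restored with qmrestore (or pct restore for a container).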
 
Hi. I have installed Tailscale on a test Proxmox host, apparently without problems:

Code:
7: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq state UNKNOWN group default qlen 500
    link/none
    inet 100.101.121.11/32 scope global tailscale0
       valid_lft forever preferred_lft forever
    inet6 fe80::7e34:a900:be32:992d/64 scope link stable-privacy
       valid_lft forever preferred_lft forever

Obviously the interface doesn't appear among the Proxmox network devices... but what if I want to filter it using the Proxmox firewall?


Code:
root@px:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 78:2b:cb:62:8b:18 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7a2b:cbff:fe62:8b18/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 78:2b:cb:62:8b:19 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7a2b:cbff:fe62:8b19/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:65:be:34:75:91 brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 78:2b:cb:62:8b:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.20/24 brd 192.168.20.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe62:8b18/64 scope link
       valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether b6:c6:1c:d6:3a:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.0/24 brd 192.168.30.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4c6:1cff:fed6:3a4c/64 scope link
       valid_lft forever preferred_lft forever
7: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq state UNKNOWN group default qlen 500
    link/none
    inet 100.101.121.11/32 scope global tailscale0
       valid_lft forever preferred_lft forever
    inet6 fe80::7e34:a900:be32:992d/64 scope link stable-privacy
       valid_lft forever preferred_lft forever
8: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 5e:c2:ca:73:88:4b brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.0/24 brd 192.168.50.255 scope global vmbr3
       valid_lft forever preferred_lft forever
    inet6 fe80::5cc2:caff:fe73:884b/64 scope link
       valid_lft forever preferred_lft forever
9: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether ca:0e:a5:96:15:46 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.0/24 brd 192.168.40.255 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::c80e:a5ff:fe96:1546/64 scope link
       valid_lft forever preferred_lft forever
10: vmbr4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 5a:8b:ac:87:41:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.60.0/24 brd 192.168.60.255 scope global vmbr4
       valid_lft forever preferred_lft forever
    inet6 fe80::588b:acff:fe87:414a/64 scope link
       valid_lft forever preferred_lft forever
11: vmbr100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 78:2b:cb:62:8b:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.21/24 brd 192.168.20.255 scope global vmbr100
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe62:8b19/64 scope link
       valid_lft forever preferred_lft forever
12: vmbr5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 6a:43:5a:37:db:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.0/24 brd 192.168.100.255 scope global vmbr5
       valid_lft forever preferred_lft forever
    inet6 fe80::6843:5aff:fe37:db4a/64 scope link
       valid_lft forever preferred_lft forever
13: tap113i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq master ovs-system state UNKNOWN group default qlen 1000
    link/ether 36:e6:80:9f:fc:5f brd ff:ff:ff:ff:ff:ff
14: tap113i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq master ovs-system state UNKNOWN group default qlen 1000
    link/ether ee:c1:5a:b7:4a:72 brd ff:ff:ff:ff:ff:ff
15: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UNKNOWN group default qlen 1000
    link/ether de:e4:e1:1d:8d:da brd ff:ff:ff:ff:ff:ff
16: veth106i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr106i0 state UP group default qlen 1000
    link/ether fe:0b:b2:46:5c:13 brd ff:ff:ff:ff:ff:ff link-netnsid 0
17: fwbr106i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:0b:b2:46:5c:13 brd ff:ff:ff:ff:ff:ff
18: fwln106o0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr106i0 state UNKNOWN group default qlen 1000
    link/ether 52:65:74:c1:f8:3e brd ff:ff:ff:ff:ff:ff
19: veth104i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether fe:1d:40:f4:9d:cd brd ff:ff:ff:ff:ff:ff link-netnsid 1
20: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq master ovs-system state UNKNOWN group default qlen 1000
    link/ether ee:b3:82:9b:54:74 brd ff:ff:ff:ff:ff:ff
21: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq master ovs-system state UNKNOWN group default qlen 1000
    link/ether 8a:aa:3e:fb:14:4c brd ff:ff:ff:ff:ff:ff
22: veth108i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether fe:b1:cb:63:63:92 brd ff:ff:ff:ff:ff:ff link-netnsid 2
root@px:~#
 
I installed it on a production Proxmox host, with the Tailscale private IP treated like any other WAN IP in the Proxmox firewall rules.

Everything works correctly.
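To illustrate, a host firewall rule of that kind might look roughly like this (a sketch: the node name is a placeholder and I'm assuming the tailnet only uses Tailscale's default 100.64.0.0/10 range):

Code:
# /etc/pve/nodes/<node>/host.fw
[OPTIONS]
enable: 1

[RULES]
# allow the web GUI and SSH only from the tailnet, like any other trusted WAN range
IN ACCEPT -source 100.64.0.0/10 -p tcp -dport 8006
IN ACCEPT -source 100.64.0.0/10 -p tcp -dport 22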

Bye bye VPN and static-IP firewall rules!! :)
 
It works well, but I have only 2 problems:
1) I can't start it in LXC containers
2) I can't get direct connections to VPSs behind an OPNsense firewall; they only work through a relay, at slow speed (see the sketch after this list).
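For problem 2, this is what I'm checking (a sketch; UDP 41641 as tailscaled's default port and the "static port" outbound NAT idea come from the Tailscale/OPNsense documentation, I haven't verified them here yet):

Code:
# inside the VPS behind OPNsense: check whether UDP works at all or only the DERP relays
tailscale netcheck

# tailscaled normally listens for WireGuard traffic on UDP 41641; keeping that source
# port unchanged (OPNsense: Firewall > NAT > Outbound, rule with "Static Port") is
# supposed to make direct connections much more likely
ss -lunp | grep tailscaled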
 
I installed it fine on the hypervisor and authenticated it with my Tailscale account, and I can see the host from other machines, but I can't connect to the web GUI.

edit: It helps if you use https:// (the GUI only answers over HTTPS, on port 8006).
 
Hi, I'm having a hard time finding any guide on how to install Tailscale directly on the hypervisor machine so that I can access the console over it.

I'm on 6.4-4.

Code:
root@pve:~# uname -a
Linux pve 5.4.106-1-pve #1 SMP PVE 5.4.106-1 (Fri, 19 Mar 2021 11:08:47 +0100) x86_64 GNU/Linux

Which tailscale distribution did you use?
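The closest thing I've found so far is the generic Debian Buster repository procedure from pkgs.tailscale.com (Proxmox 6.4 is Buster-based); roughly these steps, not yet tried on the PVE host itself:

Code:
# add the Tailscale repo and signing key for Debian Buster
curl -fsSL https://pkgs.tailscale.com/stable/debian/buster.gpg | apt-key add -
curl -fsSL https://pkgs.tailscale.com/stable/debian/buster.list > /etc/apt/sources.list.d/tailscale.list

apt-get update
apt-get install tailscale

# bring the node into the tailnet
tailscale up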
 
