Hi,
My setup:
I have 3 VPS from HOSTKEY.
Each of the VPS has just 1 ethernet interface.
This is what I've done so far:
I have installed Proxmox on the 3 VPS.
I have installed TailScale on all the 3 Proxmox servers.
I have created a Proxmox cluster. I can see that the 3 servers are part of the cluster. I was able to migrate a VM that was running on server1 to server3.
Here is the issue I'm facing:
VMs are not able to access the internet.
Output of /etc/network/interfaces from server1
Output of /etc/network/interfaces from server2
Same as above (just the ens1 interface address is different)
Output of /etc/network/interfaces from server3
Same as above (just the ens1 interface address is different)
As you can see, in my setup, ens1 gets the public ip-address provided to me by the hosting provider.
I then created vmbr0, and gave it an ip-address that falls in the private ip-address range of TailScale. (assuming this will be used by the VMs - that way, VMs in different servers will be able to talk to each other using private ip-address)
By referring some online documentation, I think I added NAT config as well (as you can see above) - thinking that will ensure the VMs can access the internet.
But I'm really not sure if what I have done is correct or not.
What I'm trying to achieve is just 2 things:
1- VMs should be able to access the internet
2- VMs in each server should be able to talk to VMs in other servers in the cluster using private ip address
Note:
My servers have just 1 ethernet interface. I'm not sure if that is going to be blocker for me in achieving what I've mentioned above.
Can someone please help me.
Thanks
My setup:
I have 3 VPS from HOSTKEY.
Each of the VPS has just 1 ethernet interface.
This is what I've done so far:
I have installed Proxmox on the 3 VPS.
I have installed TailScale on all the 3 Proxmox servers.
I have created a Proxmox cluster. I can see that the 3 servers are part of the cluster. I was able to migrate a VM that was running on server1 to server3.
Here is the issue I'm facing:
VMs are not able to access the internet.
Output of /etc/network/interfaces from server1
Code:
auto lo
iface lo inet loopback
auto ens1
iface ens1 inet static
address x.y.63.58/24
gateway x.y.63.1
auto vmbr0
iface vmbr0 inet static
address 100.64.0.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '100.64.0.0/24' -o eno1 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '100.64.0.0/24' -o eno1 -j MASQUERADE
post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
Output of /etc/network/interfaces from server2
Same as above (just the ens1 interface address is different)
Output of /etc/network/interfaces from server3
Same as above (just the ens1 interface address is different)
As you can see, in my setup, ens1 gets the public ip-address provided to me by the hosting provider.
I then created vmbr0, and gave it an ip-address that falls in the private ip-address range of TailScale. (assuming this will be used by the VMs - that way, VMs in different servers will be able to talk to each other using private ip-address)
By referring some online documentation, I think I added NAT config as well (as you can see above) - thinking that will ensure the VMs can access the internet.
But I'm really not sure if what I have done is correct or not.
What I'm trying to achieve is just 2 things:
1- VMs should be able to access the internet
2- VMs in each server should be able to talk to VMs in other servers in the cluster using private ip address
Note:
My servers have just 1 ethernet interface. I'm not sure if that is going to be blocker for me in achieving what I've mentioned above.
Can someone please help me.
Thanks