I have multiple Proxmox hypervisors on dedicated servers from cloud providers. Each gets its own public IP. I've linked the hypervisors together with WireGuard so that they each have a private IP they can reach one another on.
The problem, as I've also read elsewhere on the forums, is that clustering is only officially supported with all nodes on the same physical layer 2 network.
Adding a node to a cluster using its private WireGuard IP instead of its public IP is possible: you have to manually edit corosync.conf to use the private WireGuard IP once the node tries to join the cluster. Unfortunately, this only solves some of the issues. I use a firewall to block all traffic coming into each node on its public IP address (except SSH). It seems impossible to change /etc/pve/.members, which always contains the public IP of each node (even though you can hack corosync.conf into working, .members is read-only). Because I block all incoming ports on the public IP, some functions, such as viewing another node's info in the web GUI or opening a SPICE console, do not work, since that traffic gets blocked. Is there any way to hack something together so that the nodes contact each other on their private IPs instead of their public IPs? It seems Proxmox has yet to add official support for layer 3 clustering.
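For reference, this is roughly what the edited nodelist entries look like after swapping in the WireGuard addresses (node names and IPs here are just placeholders, substitute your own):

```
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1   # WireGuard IP instead of the public IP
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.2
  }
}
```

Corosync then runs over the tunnel fine; the remaining breakage is everything that reads .members and goes out over the public IPs.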
I believe one hack to fix this would be to DNAT the destination public IP of every other node to its private WireGuard IP; this would need to be configured manually on each node. Alternatively, I could just allow the public IP of every other node in the firewall, but that means their VMs might also have access to the hypervisor's ports.
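A rough sketch of the DNAT idea, with made-up example addresses: since the traffic is generated locally on each node, the rewrite has to happen in the nat table's OUTPUT chain (PREROUTING only sees forwarded/incoming traffic). The script below just prints the iptables commands for a hypothetical peer list; pipe it to sh as root to actually install them.

```shell
#!/bin/sh
# Rewrite locally generated traffic aimed at a peer's public IP so it
# goes to the peer's WireGuard IP instead, i.e. into the tunnel.
dnat_rule() {
  # $1 = peer public IP, $2 = peer WireGuard IP
  echo "iptables -t nat -A OUTPUT -d $1 -j DNAT --to-destination $2"
}

# Hypothetical peers; replace with the other nodes' real addresses.
# Do NOT add a rule for the node's own IPs.
dnat_rule 203.0.113.10 10.0.0.2
dnat_rule 203.0.113.11 10.0.0.3
```

This would make the web GUI proxying and SPICE connections land on the WireGuard interface, where the firewall can allow them, without touching .members. The rules would need to be persisted (e.g. via a hook or ifupdown script) and kept in sync by hand whenever a node is added.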