Virtual IP

Configure all three node IP addresses under one host name in your (redundant, highly available) local DNS, and then use that host name to connect to the Proxmox web GUI?
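For illustration, hypothetical zone-file entries (name and addresses are placeholders) that round-robin one name across the three nodes:

Code:
; one name, three A records -- the resolver rotates the answers
pve    IN  A    192.168.1.1
pve    IN  A    192.168.1.2
pve    IN  A    192.168.1.3

The GUI would then be reached at https://pve.<your-domain>:8006.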
 
Wouldn't there be a roughly one-in-three chance that the IP of the downed node is served? The more common approach is an HTTP proxy, e.g. HAProxy or a similar load balancer.
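If you go that route, a minimal TCP-mode HAProxy sketch might look like this (addresses are placeholders): TCP mode passes each node's own TLS straight through, and the health checks drop a dead node from rotation, which avoids the stale-answer problem of plain round-robin DNS.

Code:
frontend pve_gui
    bind :8006
    mode tcp
    default_backend pve_nodes

backend pve_nodes
    mode tcp
    balance roundrobin
    option tcp-check                    # drop unreachable nodes from rotation
    server pve1 192.168.1.1:8006 check
    server pve2 192.168.1.2:8006 check
    server pve3 192.168.1.3:8006 check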


Since in a PVE environment any node can serve as the API endpoint, the simplest approach is to use a reverse proxy, typically run on your router, but you can use a VM for this too. nginx or HAProxy work fine for this; you can use any of the million tutorials available.
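For example, a hypothetical nginx equivalent of the same idea, using the stream module for plain TCP pass-through (addresses are placeholders; the stream block sits at the top level of nginx.conf, not inside http):

Code:
stream {
    upstream pve_nodes {
        server 192.168.1.1:8006;
        server 192.168.1.2:8006;
        server 192.168.1.3:8006;
    }
    server {
        listen 8006;
        proxy_pass pve_nodes;
    }
}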
 
But what does the Proxmox team recommend for this? They say keepalived is not recommended.
I think what the author of that document meant to say is: it is not endorsed.

There are many external solutions; none of them is endorsed or supported directly by the PVE team. I would imagine the solutions that do not install extra software directly on the hypervisor might be preferred.


I am trying to reach my datacenter from outside my network in every case, even if any node dies.
OK, so you have a router controlling NAT between your outside and inside traffic, right? Start by looking at that router's documentation for a reverse proxy feature (it might be called "virtual server" or similar) since, by definition, you can only reach your ROUTER from outside to begin with.
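On a Linux-based router, the moral equivalent of such a "virtual server" entry is a single DNAT rule; a hedged sketch, assuming eth0 is the WAN interface and 192.168.1.10 is the inside proxy or VIP:

Code:
# forward inbound TCP 8006 from the WAN side to the inside proxy/VIP
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8006 \
    -j DNAT --to-destination 192.168.1.10:8006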
 
For me keepalived is the missing cluster technology to add to Corosync, Ceph, Proxmox HA, etc.

I set up my clusters with node IPs starting at .1 and use keepalived to provide .0 as the virtual IP. The cluster DNS name is pve and the nodes are pve<x>.
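A minimal keepalived sketch of that scheme, assuming the management network is a /24 on vmbr0 with nodes at .1-.3 and the VIP at .0 (interface name, router id and password are placeholders; a .0 host address works on modern Linux but can confuse some older clients):

Code:
# /etc/keepalived/keepalived.conf -- same file on every node,
# except that each node gets a different priority (highest wins)
vrrp_instance PVE_GUI {
    state BACKUP                  # let priority elect the master
    interface vmbr0               # bridge carrying the management network
    virtual_router_id 51          # must match on all three nodes
    priority 150                  # e.g. 150 / 100 / 50 across the nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme        # placeholder
    }
    virtual_ipaddress {
        192.168.1.0/24 dev vmbr0  # the ".0" VIP of the scheme above
    }
}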

VMware generally keeps ssh stopped and discourages its use entirely. With a vSphere cluster, you have a vCenter, which is a whopping great lump of a VM (12GB+ RAM, 4+ vCPU, 2TB thin-provisioned disk) and takes ages to boot. There are two Tomcat instances, one of which manages the VM itself. The other is a huge monster, which to be fair is rather quicker these days than it was.

On Proxmox all nodes are equal and have a light-touch GUI built in, already served through a proxy-aware webby thingie. To make the cluster GUI highly available, all you really need to move is an IP address instead of a whole VM. VRRP and keepalived work beautifully for this.

If you keep all node root passwords the same, you get a web GUI that works intuitively and seamlessly across nodes, including the webby terminal. Now firewall off ssh access from everywhere other than the cluster nodes and enforce MFA on the web GUI.
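A hedged sketch of that ssh restriction in plain iptables, assuming the cluster sits on 192.168.1.0/24 (the built-in PVE firewall can express the same rule through the GUI):

Code:
# allow ssh only from the cluster subnet, drop it from everywhere else
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP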

Then you have better-than-VMware login security and you don't have to wait for the bloody things to wake up.
 
That was my other option: running reverse proxy VMs on the cluster and VRRP on those VMs. But, since I don't have other reverse proxy needs at the moment, I configured VRRP right on the PVE nodes to see how far this simple solution gets me. PVE nodes haven't yet cared about the occasional additional IP address on one of the interfaces.
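To see which node currently holds the VIP, a plain iproute2 check works (interface and address are the placeholders from the examples above):

Code:
ip -brief address show dev vmbr0 | grep 192.168.1.0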
 
It seems to me that keepalived could be the fastest and most functional solution for moving just a simple IP. With the necessary precautions and security, I think I'll test this solution; I just wanted to confirm that it can't disturb the entire Proxmox system.
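One precaution worth including in such a test, sketched here hypothetically: track the GUI service in keepalived so the VIP also leaves a node whose pveproxy has died, not only a node that is down entirely.

Code:
vrrp_script chk_pveproxy {
    script "/usr/bin/pgrep pveproxy"   # non-zero exit marks the node unhealthy
    interval 2                         # check every 2 seconds
    fall 2                             # two failures trigger failover
}

vrrp_instance PVE_GUI {
    # ... interface, priority, virtual_ipaddress as in the earlier sketch ...
    track_script {
        chk_pveproxy
    }
}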