Using keepalived to access the cluster web interface (WI) over a single IP

juliosene

New Member
May 20, 2024
Hi!

This is a description of a solution that I'm using in my home lab. Maybe this solution can help other Proxmox users.

I've been using Proxmox for a while and have built a three-server cluster. Although the web interface of each server shows the same information, I didn't have a single IP as a unified entry point. So I decided to run keepalived across the servers in this cluster and create a VIP as a single point of access to the cluster.

If you don't know keepalived: it is a simple solution, available via apt from the standard repository, that implements a VIP (Virtual IP) shared among a group of servers. You define a master node, which receives the requests sent to the VIP, and backup nodes, which take over the master role if the master goes offline.

For anyone who likes this idea, the process is pretty simple. You will probably spend between 15 and 30 minutes implementing everything.

Maybe this feature can become part of a future version of Proxmox. I would love to see a native VIP setting at the Datacenter level.

How to:


Steps 1–3 must be done on every PVE server.

1 – Install keepalived on your servers
# apt install keepalived

2 – Create and edit a keepalived.conf

# nano /etc/keepalived/keepalived.conf

Add the following content to this file, replacing the placeholders (a filled-in example follows the template below):
#STATE# -> MASTER on the master node, BACKUP on the others
#INTERFACE# -> the interface name used for the cluster, probably vmbr0
#MYIP# -> this PVE server's IP address
#SERVERSIPS# -> the other PVE servers' IPs, one IP per line
#PRIORITY# -> a number; give the master the highest value and each backup a lower one (the highest live priority holds the VIP)
#PASSWD# -> an 8-character password that keepalived will use for authentication. It must be the same on all PVE servers.
#VIP#/#CIDR# -> your VIP and its network prefix. Make sure this IP is free to use as a static IP in your network.

Code:
vrrp_instance VI_1 {
    state #STATE#
    interface #INTERFACE#
    virtual_router_id 55
    priority #PRIORITY#
    advert_int 1
    unicast_src_ip #MYIP#
    unicast_peer {
        #SERVERSIPS#
    }
    authentication {
        auth_type PASS
        auth_pass #PASSWD#
    }
    virtual_ipaddress {
        #VIP#/#CIDR#
    }
}
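
For reference, this is what the file might look like on the master node of a hypothetical three-node cluster (node IPs 192.168.1.11-13, VIP 192.168.1.100/24; adjust all values to your own network):

Code:
vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the other two nodes
    interface vmbr0
    virtual_router_id 55
    priority 150                  # e.g. 100 and 50 on the backup nodes
    advert_int 1
    unicast_src_ip 192.168.1.11   # this node's own IP
    unicast_peer {
        192.168.1.12
        192.168.1.13
    }
    authentication {
        auth_type PASS
        auth_pass Pr0xM0x1
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
}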


Save and close the file (ctrl+x, then y).

3 – Restart keepalived
# service keepalived restart
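
To confirm that keepalived started cleanly and that the master picked up the VIP, you can check the service and the interface (vmbr0 is just the usual default; use your own interface name). On the master the VIP should show up as an extra address; on the backups it only appears after a failover:

Code:
# systemctl status keepalived
# ip -brief address show vmbr0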

4 – Test your VIP address
Open your browser and try to access https://<YOUR_VIP>:8006/

If you can reach the Proxmox WI, your VIP is working fine on your master node.

Now, on the PVE master node:

# service keepalived stop

Refresh https://<YOUR_VIP>:8006/

In this situation, one of the backup nodes should take over the VIP and you should still have access to the Proxmox WI.

Back on the PVE master:
# service keepalived start
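
If you prefer to watch the failover from a terminal instead of refreshing the browser, a quick check like this works at any point during the test (192.168.1.100 is the hypothetical VIP from the example above; expect HTTP 200 as long as some node holds the VIP, and -k is needed because the default PVE certificate is self-signed):

Code:
# curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.1.100:8006/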
 
Very nice first post :-)
 
Personally I don't see the point of this: just always use the same node for admin tasks and use another one if that node is down. That's the beauty of a multimaster cluster :)

In some cases when I do need extra authentication/filtering/whatever, I do use HAProxy in a VM in front of the cluster and that deals with balancing. The VM is in HA so it can be moved to another host if the one running it goes down.

Make sure the interface carrying the keepalived packets is fully private, as they are easily forged to disrupt keepalived's normal behavior.
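
For anyone curious what that HAProxy-in-a-VM approach can look like, here is a minimal sketch of an haproxy.cfg, assuming the same hypothetical node IPs as in the example above and a combined certificate/key already placed at /etc/haproxy/certs/pve.pem (this is only an illustration, not the poster's actual configuration):

Code:
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend pve_gui
    bind *:443 ssl crt /etc/haproxy/certs/pve.pem
    default_backend pve_nodes

backend pve_nodes
    balance source                               # keep each client pinned to one node
    server pve1 192.168.1.11:8006 ssl verify none check
    server pve2 192.168.1.12:8006 ssl verify none check
    server pve3 192.168.1.13:8006 ssl verify none check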
 
Thanks for your reply. I really appreciate your point of view and your concern.

In my case, it is running in a home lab. That's exactly the kind of environment to test things in, right? ;)

So, my view is that technology exists to simplify things: sometimes it makes very complex things easy, sometimes it makes repeatable simple things a little easier, and at the end of the day you save some time.

The idea of this post is to make a repeatable thing a little easier. If your infrastructure is only accessed internally by you, maybe this is not a concern. But if you must share an IP address with a team or another company (to allow access through a firewall, for example), maybe the content of this post is useful.

Using a VM with HAProxy or Nginx to redirect connections to the Proxmox servers' WI is nice and it works. It sounds like XCP-ng with Xen Orchestra, except the WI keeps running on the endpoints. But you are adding one more box between you and the Proxmox WI, which in the end means one more device you must keep updated while making sure its HA is working fine. It is certainly an option, though. Every solution has pros and cons, and the best one depends on the use case.
 
I love this idea!

Have you looked into how you might add SSL certificates for the domain name of the VIP? In my home lab, I have the Proxmox hosts configured to automatically obtain and renew Let's Encrypt certificates for each of them, so I am not using a self-signed cert and getting that annoying warning in my browsers. It would be great to have the VIP behind a domain name with an SSL certificate. The challenge would be setting it up so that it works on all the hosts using the same cert.
 
You can create one certificate per PVE server with the same domain name, something like proxmox.mydomain.com. Keepalived only moves the VIP to another node when the master fails.
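
If you want each node to order that shared-name certificate through Proxmox's built-in ACME support, something along these lines should work on every node (proxmox.mydomain.com is just a placeholder). Since the shared name resolves to the VIP rather than to each individual node, a DNS-01 challenge with a DNS plugin is the safer choice; only the node currently holding the VIP could answer an HTTP-01 challenge:

Code:
# pvenode config set --acme domains=proxmox.mydomain.com
# pvenode acme cert order

This assumes an ACME account and a DNS challenge plugin are already configured at the Datacenter level.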
 
I just wanted to thank you for this. I just did it on my cluster and it is nice. Now I can access my cluster with one IP address (masked behind an FQDN), and if I am doing maintenance or a host goes down, I can still access it with the same FQDN without worrying about the traffic going to the downed host and timing out.
 
How do you set the priority? Should each node be different? Should the master be low or high?

EDIT: Never mind, I figured it out. The master is high, all others are lower. The highest priority wins.
 
So this is great. Easy setup. I use Caddy as a reverse proxy, so my SSL certs are auto-renewed. I then block anything that is not coming from the local LAN, so it's not publicly accessible. I am now using the virtual IP in Caddy, and it is transparent and just works, so I can access my cluster via a URL that has a legit SSL cert.

Thanks for this, it is much appreciated!
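
For anyone who wants to replicate that, a minimal Caddyfile along these lines should do it; the domain, LAN range, and VIP below are placeholders (reusing the hypothetical addresses from earlier), not the poster's actual configuration:

Code:
proxmox.mydomain.com {
    # only allow requests from the local LAN
    @outside not remote_ip 192.168.1.0/24
    respond @outside 403

    # forward everything else to the keepalived VIP on the PVE web port
    reverse_proxy https://192.168.1.100:8006 {
        transport http {
            tls_insecure_skip_verify    # PVE's default certificate is self-signed
        }
    }
}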
 
Just one additional tip: If you have different hardware and varying usage (memory and CPU), my advice is to use the host with the least usage as your master node. This may not be your most powerful host, but it will be the most stable.
 
Thanks. In my case, I put it on the same node that Caddy is running on. This made the most sense to me, but in the end we have spread the load pretty evenly across our 3 Proxmox servers in this particular cluster.
 
Thanks for the guide, I had the same idea. Yes, a native setting at the Datacenter level would be very nice; almost everything required is already there.