Sync issues after Let's Encrypt renewal

dthompson

Hi all,

I have an issue where, whenever my two-node PMG cluster renews its Let's Encrypt certificates, the cluster stops syncing.

On the master I get this:
pmgcm status
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
swarmx1(1) 192.168.11.218 master A 44 days 12:49 0.67 30% 5%
swarmx2(8) 192.168.11.219 node ERROR: fingerprint 'C4:09:37:B3:2E:21:A0:D4:D4:95:A7:00:C8:2E:C0:65:4A:F5:E6:AC:60:66:C6:5B:2F:C6:33:FE:AD:60:F4:56' not verified, abort!


and on the slave I get this:
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
swarmx2(9) 192.168.11.219 node S 51 days 08:59 1.25 30% 3%
swarmx1(1) 192.168.11.218 master ERROR: fingerprint '83:F1:2B:1A:6F:F6:7F:C1:60:3C:2B:8F:0E:FE:A1:D7:9D:9F:B2:A4:10:B6:90:AC:E8:AD:80:01:62:5C:DC:C0' not verified, abort!

Once I delete the node and then re-add it, the master shows the following:
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
swarmx2(10) 192.168.11.219 node S 00:06 0.87 27% 3%
swarmx1(1) 192.168.11.218 master S 44 days 13:22 1.57 31% 5%

But the slave still sees:
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
swarmx1(1) 192.168.11.218 master ERROR: fingerprint '83:F1:2B:1A:6F:F6:7F:C1:60:3C:2B:8F:0E:FE:A1:D7:9D:9F:B2:A4:10:B6:90:AC:E8:AD:80:01:62:5C:DC:C0' not verified, abort! - - -% -%
swarmx2(10) 192.168.11.219 node S 00:07 0.77 28% 3%

It seems to me that it's the fingerprint that has the issue, but I'm not sure how to get around it.
What do I need to do to set this up so that it doesn't error out like this after every renewal?
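
As far as I can tell, the fingerprint in that error is the SHA-256 fingerprint of the API certificate the other node is now presenting, which no longer matches what the cluster has stored. A quick way to compare the two (assuming the PMG API is listening on its default port 8006):

# fingerprint of the local API certificate
openssl x509 -in /etc/pmg/pmg-api.pem -noout -fingerprint -sha256

# fingerprint the other node is presenting right now
openssl s_client -connect 192.168.11.219:8006 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256

# fingerprints the cluster still expects
grep fingerprint /etc/pmg/cluster.conf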
 
Thanks Tom. So do I just have to run the openssl command once the renewal happens, or do I still need to go through removing and re-adding the cluster on the slave servers to get this working properly?
 
As written in the docs.

You need to get the new fingerprint with the openssl command first, and then you need to add/change the new fingerprint in your cluster.conf file (see /etc/pmg/cluster.conf).
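
For reference, the relevant part of /etc/pmg/cluster.conf on a two-node setup looks roughly like this (hostnames and IPs taken from this thread; fingerprints and key blobs replaced with placeholders):

master: 1
        fingerprint <sha256 fingerprint of the master's /etc/pmg/pmg-api.pem>
        hostrsapubkey ...
        ip 192.168.11.218
        maxcid 10
        name swarmx1
        rootrsapubkey ...

node: 10
        fingerprint <sha256 fingerprint of the node's /etc/pmg/pmg-api.pem>
        hostrsapubkey ...
        ip 192.168.11.219
        name swarmx2
        rootrsapubkey ...

After a renewal it is the fingerprint line of whichever entry got the new certificate that has to be updated.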
 
Awesome, thank you. So if I understand correctly, the steps would be as follows:

1.) Run this command: openssl x509 -in /etc/pmg/pmg-api.pem -noout -fingerprint -sha256
2.) Take the output of the command and add it to /etc/pmg/cluster.conf in the master section:

master: 1
        fingerprint <output goes here>

3.) Then remove the slave node:
pmgcm status
pmgcm delete (id)

4.) Then, on the slave node, rejoin it to the cluster via the GUI (or the CLI; see the command sketch at the end of this post)


If that's the case, then I will do just that, make some notes, and mark this as solved so anyone in my situation will be able to find a solution quickly.
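
Putting the steps above together as commands (the CID, IPs and hostnames are just the ones from this thread; the join can also be done from the GUI):

# 1) on the node whose certificate was renewed: print the new fingerprint
openssl x509 -in /etc/pmg/pmg-api.pem -noout -fingerprint -sha256

# 2) on the master: put that fingerprint into the matching section of the config
nano /etc/pmg/cluster.conf

# 3) on the master: check the cluster state and remove the stale node entry
pmgcm status
pmgcm delete 10

# 4) on the slave: re-join it to the master
#    (--fingerprint is optional and takes the master's current API certificate fingerprint)
pmgcm join 192.168.11.218 --fingerprint <master fingerprint>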
 
