[SOLVED] ERROR: 401 permission denied - invalid PMG ticket

H.c.K

Hi,
I have a cluster, and I failed to join a new node to it.

master:
root@pmg3:~# pmgversion -v
proxmox-mailgateway: 6.1-1 (API: 6.1-2/53ccdd75, running kernel: 5.3.10-1-pve)
pmg-api: 6.1-2
pmg-gui: 2.1-3
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.3.10-1-pve: 5.3.10-1
libarchive-perl: 3.3.3-1
libjs-extjs: 6.0.1-10
libjs-framework7: 4.4.7-1
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-8
libpve-http-server-perl: 3.0-3
libxdgmime-perl: 0.01-5
lvm2: 2.03.02-3
pmg-docs: 6.1-2
proxmox-mini-journalreader: 1.1-1
proxmox-spamassassin: 3.4.2-13
proxmox-widget-toolkit: 2.0-9
pve-firmware: 3.0-4
pve-xtermjs: 3.13.2-1
zfsutils-linux: 0.8.2-pve2

root@pmg3:~# pmgcm status
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
pmg7(6) 1.1.1.123 node ERROR: 401 permission denied - invalid PMG ticket - - -% -% -> dedicated (I joined it the same way, but it was not added and gives this error. My other dedicated servers gave the same error.)
pmg3(1) 1.1.1.17 master S 8 days 15:19 0.88 45% 19% -> VM
pmg4(2) 1.1.1.18 node S 8 days 15:27 0.07 38% 10% -> VM
pmg6(5) 1.1.1.93 node S 1 day 01:05 0.04 13% 4% -> dedicated (this server was added successfully; there was no problem.)



node:
root@pmg7:~# pmgversion -v
proxmox-mailgateway: 6.1-1 (API: 6.1-2/53ccdd75, running kernel: 5.3.10-1-pve)
pmg-api: 6.1-2
pmg-gui: 2.1-3
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.3.10-1-pve: 5.3.10-1
libarchive-perl: 3.3.3-1
libjs-extjs: 6.0.1-10
libjs-framework7: 4.4.7-1
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-8
libpve-http-server-perl: 3.0-3
libxdgmime-perl: 0.01-5
lvm2: 2.03.02-3
pmg-docs: 6.1-2
proxmox-mini-journalreader: 1.1-1
proxmox-spamassassin: 3.4.2-13
proxmox-widget-toolkit: 2.0-9
pve-firmware: 3.0-4
pve-xtermjs: 3.13.2-1
zfsutils-linux: 0.8.2-pve2

root@pmg7:~# pmgcm status
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
pmg4(2) 1.1.1.18 node ERROR: 401 permission denied - invalid PMG ticket - - -% -%
pmg6(5) 1.1.1.93 node ERROR: 401 permission denied - invalid PMG ticket - - -% -%
pmg3(1) 1.1.1.17 master ERROR: 401 permission denied - invalid PMG ticket - - -% -%
pmg7(6) 1.1.1.123 node S 1 day 01:04 0.16 27% 4%

root@pmg7:~# pmgcm join 1.1.1.17
Enter password: **********************
The authenticity of host '1.1.1.17' can't be established.
X509 SHA256 key fingerprint is C5:6F:DB:50:61:***********************8:92:7C:79:24:F4.
Are you sure you want to continue connecting (yes/no)? yes
stop all services accessing the database
save new cluster configuration
cluster node successfully joined
updated /etc/pmg/cluster.conf
updated /etc/pmg/pmg-authkey.key
updated /etc/pmg/pmg-authkey.pub
updated /etc/pmg/pmg-csrf.key
updated /etc/pmg/user.conf
updated /etc/pmg/domains
updated /etc/pmg/mynetworks
updated /etc/pmg/transport
updated /etc/pmg/pmg.conf
copying master database from '1.1.1.17'
copying master database finished (got 10583203 bytes)
delete local database
could not change directory to "/root": Permission denied
create new local database
could not change directory to "/root": Permission denied
insert received data into local database
creating indexes
run analyze to speed up database queries
could not change directory to "/root": Permission denied
ANALYZE
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
could not change directory to "/root": Permission denied
syncing quarantine data
syncing quarantine data finished



***********
Last status:
root@pmg3:~# pmgcm status
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
pmg6(5) 1.1.1.93 node S 1 day 01:23 0.02 13% 4% -> New Dedicated Machine
pmg8(7) 1.1.1.129 node ERROR: 401 permission denied - invalid PMG ticket - - -% -% -> New Dedicated Machine
pmg9(8) 1.1.1.135 node S 1 day 01:23 0.14 13% 4% -> New Dedicated Machine
pmg7(6) 1.1.1.123 node ERROR: 401 permission denied - invalid PMG ticket - - -% -% -> New Dedicated Machine
pmg3(1) 1.1.1.17 master S 8 days 15:37 0.85 47% 19% -> Virtual Machine
pmg5(9) 1.1.1.63 node ERROR: 401 permission denied - invalid PMG ticket - - -% -% -> New Dedicated Machine
pmg4(2) 1.1.1.18 node S 8 days 15:45 0.28 39% 10% -> Virtual Machine

pmg5 = pmgcm join 1.1.1.17 = failed
pmg6 = pmgcm join 1.1.1.17 = ok
pmg7 = pmgcm join 1.1.1.17 = failed
pmg8 = pmgcm join 1.1.1.17 = failed
pmg9 = pmgcm join 1.1.1.17 = ok
All of them have the same PMG version.
 
Why would you need so many nodes in one cluster?
(it's a rather uncommon setup - I would not be sure that it scales too well - plus I'm not sure how much mail you would have to process to need so many machines...)

Is there a log I can check for this problem?
check the journal and syslog:
* `journalctl -f` (for following the current journal)
* `less /var/log/syslog` (for the syslog)
additionally maybe the pmgproxy log contains some hints:
* `less /var/log/pmgproxy/pmgproxy.log`
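for example, to narrow it down, something like this might work (not verified here, assuming the default log paths and service names):
* `journalctl -u pmgproxy -u pmgmirror --since today` (only the services involved in cluster communication)
* `grep '401' /var/log/pmgproxy/pmgproxy.log` (find the rejected API requests)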

I hope this helps!
 

We are planning a large structure. Currently I could not add 3 of the node servers to the cluster. Some logs are as follows:

root@pmg8:~# tail -f /var/log/syslog
Jan 22 12:44:27 pmg8 pmgmirror[3036]: database sync 'pmg9' failed - large time difference (> 10394 seconds) - not syncing
Jan 22 12:44:27 pmg8 pmgmirror[3036]: database sync 'pmg7' failed - large time difference (> 129 seconds) - not syncing
Jan 22 12:44:27 pmg8 pmgmirror[3036]: database sync 'pmg5' failed - large time difference (> 54 seconds) - not syncing
Jan 22 12:44:27 pmg8 pmgmirror[3036]: database sync 'pmg3' failed - large time difference (> 11148 seconds) - not syncing
Jan 22 12:44:27 pmg8 pmgmirror[3036]: database sync 'pmg6' failed - large time difference (> 10891 seconds) - not syncing
Jan 22 12:44:27 pmg8 pmgmirror[3036]: cluster syncronization finished (6 errors, 3.50 seconds (files 0.00, database 3.34, config 0.16))
Jan 22 12:45:09 pmg8 pmg-smtp-filter[3019]: starting database maintainance
Jan 22 12:45:09 pmg8 pmg-smtp-filter[3019]: end database maintainance (4 ms)
Jan 22 12:45:13 pmg8 pmgpolicy[3043]: starting policy database maintainance (greylist, rbl)
Jan 22 12:45:13 pmg8 pmgpolicy[3043]: end policy database maintainance (9 ms, 4 ms)
Jan 22 12:46:19 pmg8 postfix/postscreen[18472]: CONNECT from [45.143.222.199]:54792 to [1.1.1.129]:25
Jan 22 12:46:19 pmg8 postfix/postscreen[18472]: PREGREET 11 after 0.05 from [45.143.222.199]:54792: EHLO User\r\n
Jan 22 12:46:20 pmg8 postfix/postscreen[18472]: DISCONNECT [45.143.222.199]:54792
Jan 22 12:46:23 pmg8 pmgmirror[3036]: starting cluster syncronization
Jan 22 12:46:23 pmg8 pmgmirror[3036]: database sync 'pmg4' failed - large time difference (> 10852 seconds) - not syncing
Jan 22 12:46:26 pmg8 pmgmirror[3036]: database sync 'pmg3' failed - large time difference (> 11148 seconds) - not syncing
Jan 22 12:46:26 pmg8 pmgmirror[3036]: database sync 'pmg6' failed - large time difference (> 10891 seconds) - not syncing
Jan 22 12:46:26 pmg8 pmgmirror[3036]: database sync 'pmg5' failed - large time difference (> 55 seconds) - not syncing
Jan 22 12:46:26 pmg8 pmgmirror[3036]: database sync 'pmg7' failed - large time difference (> 129 seconds) - not syncing
Jan 22 12:46:26 pmg8 pmgmirror[3036]: database sync 'pmg9' failed - large time difference (> 10393 seconds) - not syncing
Jan 22 12:46:26 pmg8 pmgmirror[3036]: cluster syncronization finished (6 errors, 3.49 seconds (files 0.00, database 3.32, config 0.16))
Jan 22 12:47:09 pmg8 pmg-smtp-filter[3019]: starting database maintainance
Jan 22 12:47:09 pmg8 pmg-smtp-filter[3019]: end database maintainance (4 ms)
Jan 22 12:47:23 pmg8 pmgpolicy[3043]: starting policy database maintainance (greylist, rbl)
Jan 22 12:47:23 pmg8 pmgpolicy[3043]: end policy database maintainance (13 ms, 2 ms)

root@pmg4:~# cat /var/log/syslog | grep pmg8
Jan 22 00:00:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10852 seconds) - not syncing
Jan 22 00:02:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10852 seconds) - not syncing
Jan 22 00:04:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10852 seconds) - not syncing
Jan 22 00:06:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10852 seconds) - not syncing
Jan 22 00:08:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10852 seconds) - not syncing
Jan 22 00:10:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10852 seconds) - not syncing
Jan 22 00:12:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10852 seconds) - not syncing
Jan 22 00:14:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10853 seconds) - not syncing
Jan 22 00:16:44 pmg4 pmgmirror[4086]: database sync 'pmg8' failed - large time difference (> 10853 seconds) - not syncing

I deleted it and added it again many times, but did not get any result. How can we solve this problem?
 
large time difference (> 10852 seconds) - not syncing
configure NTP on all servers and make sure they are synchronized
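for example (a quick sketch, assuming systemd-timesyncd, which should be the default time service on a PMG installation):
* `timedatectl status` (check "System clock synchronized" and whether an NTP service is active)
* `timedatectl set-ntp true` (enable the NTP service if it is off)
* `date -R` (run on every node to compare the clocks directly)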

I would again suggest reconsidering your deployment: do not start out with so many nodes, but add them as they become necessary.

I hope this helps!
 

@Stoiko Ivanov thanks for helping.

Due to the time difference between the servers, they could not join the cluster configuration. With the settings below they joined successfully. Now they are all in the cluster.

root@pmg5:~# timedatectl set-ntp no
root@pmg5:~# timedatectl set-time 12:49:10
root@pmg5:~# timedatectl set-ntp yes
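
To verify the fix I ran roughly the following on each node (a sketch from memory; `timedatectl status` should report "System clock synchronized: yes" once NTP has caught up, and `pmgcm status` should show state 'S' for every node):

root@pmg5:~# timedatectl status
root@pmg5:~# pmgcm status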
 
