Unable to add third node to the cluster

shimi

New Member
Sep 22, 2021
Hello everybody,

I have been running a two-node cluster for quite some time. Both nodes are VMs on PVE in the same subnet. Both nodes are on PMG 7.0-1 (pmg-api 7.0-7).

I am struggling with adding a third node. I was following the steps from both the online documentation and this forum. I was able to add a third node (a freshly installed VM in a remote location, connected to the other nodes via a WireGuard VPN tunnel). The third node seems to synchronize correctly:

Code:
Sep 23 11:30:27 pmg-office systemd[1]: Starting Proxmox Mail Gateway Database Mirror Daemon...
Sep 23 11:30:30 pmg-office pmgmirror[7948]: starting server
Sep 23 11:30:30 pmg-office systemd[1]: Started Proxmox Mail Gateway Database Mirror Daemon.
Sep 23 11:32:31 pmg-office pmgmirror[7948]: starting cluster synchronization
Sep 23 11:32:43 pmg-office pmgmirror[7948]: cluster synchronization finished  (0 errors, 12.40 seconds (files 2.89, database 7.02, config 2.50))
Sep 23 11:34:30 pmg-office pmgmirror[7948]: starting cluster synchronization
Sep 23 11:34:41 pmg-office pmgmirror[7948]: cluster synchronization finished  (0 errors, 10.87 seconds (files 2.99, database 5.78, config 2.09))
Sep 23 11:36:30 pmg-office pmgmirror[7948]: starting cluster synchronization
Sep 23 11:36:41 pmg-office pmgmirror[7948]: cluster synchronization finished  (0 errors, 11.40 seconds (files 3.06, database 6.15, config 2.19))

But neither the original master nor the original node seems to synchronize correctly:

Code:
Sep 23 11:26:44 pmg pmgmirror[866]: cluster synchronization finished  (0 errors, 0.15 seconds (files 0.12, database 0.03, config 0.00))
Sep 23 11:28:44 pmg pmgmirror[866]: starting cluster synchronization
Sep 23 11:28:44 pmg pmgmirror[866]: database sync 'pmg-backup' failed - large time difference (> 7 seconds) - not syncing
Sep 23 11:28:44 pmg pmgmirror[866]: cluster synchronization finished  (1 errors, 0.01 seconds (files 0.00, database 0.01, config 0.00))
Sep 23 11:30:44 pmg pmgmirror[866]: starting cluster synchronization
Sep 23 11:30:45 pmg pmgmirror[866]: database sync 'pmg-office' failed - large time difference (> 7 seconds) - not syncing
Sep 23 11:30:45 pmg pmgmirror[866]: database sync 'pmg-backup' failed - large time difference (> 7 seconds) - not syncing
Sep 23 11:30:45 pmg pmgmirror[866]: cluster synchronization finished  (2 errors, 0.26 seconds (files 0.00, database 0.26, config 0.00))
Sep 23 11:32:44 pmg pmgmirror[866]: starting cluster synchronization
Sep 23 11:32:47 pmg pmgmirror[866]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                    DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Sep 23 11:32:48 pmg pmgmirror[866]: cluster synchronization finished  (1 errors, 3.51 seconds (files 1.70, database 1.81, config 0.00))
Sep 23 11:34:44 pmg pmgmirror[866]: starting cluster synchronization
Sep 23 11:34:47 pmg pmgmirror[866]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                    DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Sep 23 11:34:47 pmg pmgmirror[866]: cluster synchronization finished  (1 errors, 3.15 seconds (files 1.54, database 1.60, config 0.00))
Sep 23 11:36:44 pmg pmgmirror[866]: starting cluster synchronization
Sep 23 11:36:47 pmg pmgmirror[866]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                    DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.

Code:
Sep 23 11:28:59 pmg-backup pmgmirror[846]: starting cluster synchronization
Sep 23 11:28:59 pmg-backup pmgmirror[846]: database sync 'pmg' failed - large time difference (> 6 seconds) - not syncing
Sep 23 11:29:00 pmg-backup pmgmirror[846]: cluster synchronization finished  (1 errors, 0.71 seconds (files 0.00, database 0.46, config 0.25))
Sep 23 11:30:59 pmg-backup pmgmirror[846]: starting cluster synchronization
Sep 23 11:30:59 pmg-backup pmgmirror[846]: database sync 'pmg' failed - large time difference (> 6 seconds) - not syncing
Sep 23 11:30:59 pmg-backup pmgmirror[846]: database sync 'pmg-office' failed - DBI connect('dbname=Proxmox_ruledb;host=/run/pmgtunnel;port=3;','root',...) failed: could not connect to server: No such file or directory
                                                   Is the server running locally and accepting
                                                   connections on Unix domain socket "/run/pmgtunnel/.s.PGSQL.3"? at /usr/share/perl5/PMG/DBTools.pm line 66.
Sep 23 11:30:59 pmg-backup pmgmirror[846]: cluster synchronization finished  (2 errors, 0.73 seconds (files 0.00, database 0.43, config 0.30))
Sep 23 11:32:59 pmg-backup pmgmirror[846]: starting cluster synchronization
Sep 23 11:33:03 pmg-backup pmgmirror[846]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                           DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Sep 23 11:33:03 pmg-backup pmgmirror[846]: cluster synchronization finished  (1 errors, 3.94 seconds (files 1.64, database 2.07, config 0.23))
Sep 23 11:34:59 pmg-backup pmgmirror[846]: starting cluster synchronization
Sep 23 11:35:03 pmg-backup pmgmirror[846]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                           DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Sep 23 11:35:03 pmg-backup pmgmirror[846]: cluster synchronization finished  (1 errors, 4.08 seconds (files 1.59, database 2.26, config 0.23))

I am able to ssh from every node to every other node without providing a password.
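For reference, one quick way to verify this from each node (IPs are placeholders; BatchMode makes ssh fail instead of prompting if key authentication doesn't work):
Code:
for host in 192.168.XXX.105 192.168.XXX.100; do
    ssh -o BatchMode=yes root@$host hostname
done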

And one more status result (which is the same on all nodes):
Code:
root@pmg-office:/# pmgcm status
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
pmg(1)               192.168.XXX.105 master S     1 day 02:19   0.02    62%    75%
pmg-office(3)        192.168.XX.150  node   A           16:28   0.39    89%    42%
pmg-backup(2)        192.168.XXX.100 node   S     1 day 02:20   0.01    51%    74%

I would be grateful if somebody could point me in the right direction on how to successfully create the three-node cluster.

Thanks,

Martin


EDIT:
Although both original nodes are stuck on syncing, the dashboards on all three nodes look identical, so at least some sort of synchronisation is taking place. However, the syslogs on the first two nodes show the same error.
 
database sync 'pmg' failed - large time difference (> 6 seconds) - not syncing
I think installing and configuring an NTP daemon should fix the issue.
Unless you're running PMG as an LXC container, I'd suggest chrony for this; if it is running as an LXC container, install systemd-timesyncd instead (and check the time on the nodes where it's running).
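For reference, a rough sketch of what that could look like on a PMG VM (standard Debian packages; adjust if another time sync daemon is already active):
Code:
apt update && apt install -y chrony
systemctl enable --now chrony
chronyc tracking    # shows the current offset against the configured NTP sources
timedatectl         # should report "System clock synchronized: yes"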
I hope this helps!
 
Hello Stoiko,

thank you for the hint. But, I am afraid, that is not it. All nodes run as VMs and chrony is running on all of them.

The synchronization error in the log still persists. What I find strange is that only the first two nodes (i.e. those that were originally running in the cluster without any issue) report this error; the third node (pmg-office), which I added last, doesn't report any error.

1st node - master:
Code:
Oct 15 11:01:15 pmg pmgmirror[868]: starting cluster synchronization
Oct 15 11:01:18 pmg pmgmirror[868]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                    DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 11:01:18 pmg pmgmirror[868]: cluster synchronization finished  (1 errors, 3.51 seconds (files 1.67, database 1.83, config 0.00))
Oct 15 11:03:15 pmg pmgmirror[868]: starting cluster synchronization
Oct 15 11:03:19 pmg pmgmirror[868]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                    DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 11:03:19 pmg pmgmirror[868]: cluster synchronization finished  (1 errors, 3.70 seconds (files 1.90, database 1.80, config 0.00))
Oct 15 11:05:15 pmg pmgmirror[868]: starting cluster synchronization
Oct 15 11:05:20 pmg pmgmirror[868]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                    DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 11:05:20 pmg pmgmirror[868]: cluster synchronization finished  (1 errors, 5.02 seconds (files 2.63, database 2.39, config 0.00))
Oct 15 11:07:15 pmg pmgmirror[868]: starting cluster synchronization
Oct 15 11:07:20 pmg pmgmirror[868]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                    DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 11:07:20 pmg pmgmirror[868]: cluster synchronization finished  (1 errors, 5.40 seconds (files 2.71, database 2.69, config 0.00))

2nd node:
Code:
Oct 15 11:00:08 pmg-backup pmgmirror[819]: starting cluster synchronization
Oct 15 11:00:12 pmg-backup pmgmirror[819]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                           DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 11:00:12 pmg-backup pmgmirror[819]: cluster synchronization finished  (1 errors, 4.31 seconds (files 1.69, database 2.41, config 0.21))
Oct 15 11:02:08 pmg-backup pmgmirror[819]: starting cluster synchronization
Oct 15 11:02:12 pmg-backup pmgmirror[819]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                           DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 11:02:12 pmg-backup pmgmirror[819]: cluster synchronization finished  (1 errors, 4.25 seconds (files 1.54, database 2.46, config 0.25))
Oct 15 11:04:08 pmg-backup pmgmirror[819]: starting cluster synchronization
Oct 15 11:04:13 pmg-backup pmgmirror[819]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                           DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 11:04:13 pmg-backup pmgmirror[819]: cluster synchronization finished  (1 errors, 5.25 seconds (files 2.17, database 2.84, config 0.25))
Oct 15 11:06:08 pmg-backup pmgmirror[819]: starting cluster synchronization
Oct 15 11:06:15 pmg-backup pmgmirror[819]: database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                           DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 11:06:15 pmg-backup pmgmirror[819]: cluster synchronization finished  (1 errors, 6.18 seconds (files 2.86, database 3.07, config 0.25))

And what puzzles me the most is that all nodes, including pmg-office, seem to synchronize correctly. E.g. I can see identical statistics on all three nodes despite the synchronisation error on two nodes.

Screen Shot 2021-10-15 at 11.17.01.png

Best regards,

Martin
 
database sync 'pmg-office' failed - DBD::Pg::st execute failed: ERROR: duplicate key value violates unique constraint "cstatistic_pkey"

I think I've seen this a few times - mostly when a node was 'removed' and 'joined' again to the cluster by editing/removing /etc/pmg/cluster.conf manually

could you please post:
* `pmgcm status`
* /etc/pmg/cluster.conf
* `ls -la /var/spool/pmg/cluster/`
 
Hello Stoiko,

I did not remove and re-join a node to the cluster. The first two nodes have been part of the cluster since the beginning, and the problem started when I tried to add the third node.

pmgcm status:
Code:
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
pmg(1)               192.168.200.XXX master S    2 days 01:54   0.00    66%    85%
pmg-office(3)        192.168.99.XXX  node   A   18 days 00:49   0.01    59%    57%
pmg-backup(2)        192.168.200.XXX node   S    9 days 23:46   0.00    52%    87%

/etc/pmg/cluster.conf:
Code:
master: 1
        fingerprint 6A:AD:07:3B:33:7B:_____
        hostrsapubkey AAAAB3NzaC1yc2EAAAADAQABAAABAQC3vUA6SmwxTVO2jSa+dfr7Gb/H83Jko
        ip 192.168.200.XXX
        maxcid 3
        name pmg
        rootrsapubkey AAAAB3NzaC1yc2EAAAADAQABAAABAQDPOn0g7lrk6Oob5eCHclflNV2OEtJYm

node: 2
        fingerprint BE:45:5C:4B:6C:7D:______
        hostrsapubkey AAAAB3NzaC1yc2EAAAADAQABAAABgQC+XXNpOR0iiyJh1t5PAjQJOwwNhGlwG
        ip 192.168.200.XXX
        name pmg-backup
        rootrsapubkey AAAAB3NzaC1yc2EAAAADAQABAAABAQC6Tb7G1p1o4JYf6MxEEEBKr8RVVk/7/

node: 3
        fingerprint C9:78:38:9C:18:9C:______
        hostrsapubkey AAAAB3NzaC1yc2EAAAADAQABAAABgQDBGXOEboiNSfLzZl4PZNQnQTlHJ1bCG
        ip 192.168.99.XXX
        name pmg-office
        rootrsapubkey AAAAB3NzaC1yc2EAAAADAQABAAABAQDKOhwfXzhT5ASKgzfWd0sJ+kDxwtXa3S
I didn't feel like sharing every detail, so I redacted the output. I hope this is sufficient. All keys were different...

ls -la /var/spool/pmg/cluster/:
Code:
total 20
drwxr-xr-x 5 root root 4096 May 14 20:37 .
drwxr-xr-x 7 root root 4096 May 14 20:37 ..
drwxr-xr-x 5 root root 4096 May  4 20:03 1
drwxr-xr-x 5 root root 4096 May 15 10:18 2
drwxr-xr-x 5 root root 4096 May 14 20:37 3
Frankly, I do not understand why the dates are so old, or why "3" is older than "2".

Best regards,

Martin
 
E.g. I can see the identical statistics on all three nodes despite the synchronisation error on two nodes.
The statistics are always read from the master-node - so this does not mean that everything is syncing correctly.

How did you set up the third node?
(Was it maybe a clone of one of the others?)

In any case, I think the following procedure should work to get the cluster running smoothly (a command sketch follows the list) - if not, please post the complete journal of the master node and of the node you're trying to join (feel free to redact the information you don't want to share, similar to how you did already):

* run `pmgcm delete 3` on the master
* if easily possible, simply set up pmg-office fresh
* else, remove /etc/pmg/cluster.conf on the pmg-office node and remove all directories in /var/spool/pmg/cluster/*
* join pmg-office to the cluster again
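Roughly, as commands (just a sketch of the steps above; CID 3 is taken from your pmgcm status output, and the join command printed by `pmgcm join-cmd` is run on the node itself):
Code:
# on the master: drop the stale cluster entry for the third node
pmgcm delete 3

# on pmg-office (only if you don't reinstall it fresh): wipe the old cluster state
rm /etc/pmg/cluster.conf
rm -rf /var/spool/pmg/cluster/*

# on the master: print the join command, then run its output on pmg-office
pmgcm join-cmd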

I hope this helps!
 
I do not remember exactly how I set up the third node. It definitely wasn't cloned because it doesn't run on PVE.

Back to the problem:
- I deleted the third node on the master (pmgcm delete 3),
- I set up the third node from scratch,
- I waited until synchronization between the master and the remaining node finished - it ended with the same synchronization error, so I deleted the second node on the master as well (pmgcm delete 2),
- I ended up with a one-node cluster,
- I tried to add the freshly installed node (the former third node) to the cluster (pmgcm join-cmd on the master and the result pasted on the new node),
- this is the result:

Code:
root@pmg-office:~# pmgcm status
NAME(CID)--------------IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
pmg-office(4)        192.168.99.XXX  node   S           00:30   0.06    85%    36%
pmg(1)               192.168.200.XXX master A    2 days 05:29   0.00    65%    86%

Code:
Oct 15 15:31:59 pmg-office pmgmirror[1174]: starting server
Oct 15 15:31:59 pmg-office systemd[1]: Started Proxmox Mail Gateway Database Mirror Daemon.
Oct 15 15:34:00 pmg-office pmgmirror[1174]: starting cluster synchronization
Oct 15 15:34:04 pmg-office pmgmirror[1174]: syncing deleted node 3 from master '192.168.200.105'
Oct 15 15:34:08 pmg-office pmgmirror[1174]: database sync 'pmg' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
                                            DETAIL:  Key (cid, rid)=(3, 1472) already exists. at /usr/share/perl5/PMG/DBTools.pm line 1095.
Oct 15 15:34:11 pmg-office pmgmirror[1174]: cluster synchronization finished  (1 errors, 10.96 seconds (files 1.67, database 6.58, config 2.71))

What am I doing wrong?

BR,
Martin
 
Out of curiosity, I re-joined the former 2nd node using the alternative approach - removing /etc/pmg/cluster.conf on the pmg-backup node and removing all directories in /var/spool/pmg/cluster/*.

The master node is active, but both (new) nodes are stuck on synchronization - both nodes are trying to synchronize the deleted node from the master.
Code:
Oct 15 16:12:54 pmg-backup pmgmirror[1047]: starting cluster synchronization
Oct 15 16:12:54 pmg-backup pmgmirror[1047]: syncing deleted node 3 from master '192.168.200.105'
Oct 15 16:12:55 pmg-backup pmgmirror[1047]: database sync 'pmg' failed - DBD::Pg::st execute failed: ERROR:  duplicate key value violates unique constraint "cstatistic_pkey"
Oct 15 16:12:58 pmg-backup pmgmirror[1047]: cluster synchronization finished  (1 errors, 3.65 seconds (files 1.66, database 1.75, config 0.25))

Is this intended behaviour or a bug? Why are new nodes trying to sync a deleted node from the master?
 
Hm - this is odd ...
Is the error always for
DETAIL: Key (cid, rid)=(3, 1472)
?

if yes could you please issue:
Code:
psql Proxmox_ruledb -c 'select * from cstatistic where cid=3;'

and share the output here (if it's not too large)
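If it is, the single conflicting row from the error message could also be checked the same way on each node (just a narrower variant of the query above):
Code:
psql Proxmox_ruledb -c "select * from cstatistic where cid=3 and rid=1472;"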

Is this an intended behaviour or a bug? Why are new nodes trying to sync a deleted node from the master?
This part is intended - if you remove a node, its quarantine is made available on the master (and if you remove a master node, the new master needs to take over - so quarantined mail is synchronized even for deleted nodes).
 
Yes, it is always the same error, with the same cid & rid.

Here is the redacted select:
Code:
 cid | rid  |  id  |    time    |  bytes   | direction | spamlevel | virusinfo | ptime |    
-----+------+------+------------+----------+-----------+-----------+-----------+-------+---------------------------------------------------------------------------------
   3 | 1472 | 1472 | 1621024910 |     3098 | f         |         0 |           |    59 | sim
   3 | 1473 | 1473 | 1621025067 |     4035 | f         |         0 |           |    58 | sim
   3 | 1474 | 1474 | 1621064004 |     6421 | t         |         0 |           |  3119 | mar
   3 | 1475 | 1475 | 1621064039 |     6403 | f         |         0 |           |    84 | sim
   3 | 1476 | 1476 | 1621065124 |     6408 | t         |         0 |           |  2055 | mar
   3 | 1477 | 1477 | 1621065520 |     6416 | f         |         0 |           |    49 | sim
   3 | 1478 | 1478 | 1621072805 |   100112 | t         |         1 |           |  3918 | bou
   3 | 1480 | 1480 | 1621076946 |     1686 | t         |         0 |           |  3260 | sim
   3 | 1482 | 1482 | 1621077114 |     6421 | t         |         0 |           |  1139 | mar
   3 | 1485 | 1485 | 1621126817 |  1579651 | t         |         0 |           |  3615 | mar
   3 | 1486 | 1486 | 1621126865 |    74448 | t         |         0 |           |  3679 | mar
   3 | 1487 | 1487 | 1621126922 |    73419 | t         |         0 |           |  1857 | mar
   3 | 1488 | 1488 | 1621126983 |    72050 | t         |         0 |           |  1886 | mar
   3 | 1489 | 1489 | 1621127042 |    55147 | t         |         0 |           |   397 | mar
   3 | 1490 | 1490 | 1621127102 |    81200 | t         |         0 |           |   749 | mar
   3 | 1491 | 1491 | 1621127411 |   468348 | t         |         0 |           |   881 | mar
   3 | 1492 | 1492 | 1621127463 |   330672 | t         |         0 |           |  1307 | mar
   3 | 1493 | 1493 | 1621127716 |  3532030 | t         |         0 |           |  1112 | mar
   3 | 1494 | 1494 | 1621130476 |     1793 | t         |         0 |           |  1220 | mar
   3 | 1496 | 1496 | 1621154200 |   116429 | t         |         1 |           |  3397 | bou
   3 | 1500 | 1500 | 1621206157 |    15463 | t         |        27 |           |  9005 | mel
   3 | 1501 | 1501 | 1621207003 |   117259 | t         |        12 |           |  1107 | ukx
   3 | 1503 | 1503 | 1621228027 |   114121 | t         |         1 |           |  3722 | bou
   3 | 1504 | 1504 | 1621245294 |        0 | t         |         4 |           |     0 | klu
   3 | 1506 | 1506 | 1621262234 |  1382313 | t         |         1 |           |  7014 | m.s
   3 | 1507 | 1507 | 1621271428 |    54607 | t         |         1 |           |  6863 | bou
   3 | 1508 | 1508 | 1621274331 |   112390 | t         |         2 |           |  4323 | ojd
   3 | 1509 | 1509 | 1621277137 |     1597 | t         |         0 |           |  2595 | mar
   3 | 1510 | 1510 | 1621314495 |   142472 | t         |         1 |           |  5211 | bou
   3 | 1511 | 1511 | 1621315681 |    43305 | t         |         0 |           |  3784 | 010
   3 | 1512 | 1512 | 1621315638 |        0 | t         |         4 |           |     0 | rid
   3 | 1513 | 1513 | 1621317157 |   125568 | t         |         0 |           |  3819 | pok
   3 | 1514 | 1514 | 1621321073 |        0 | t         |         4 |           |     0 | stu
   3 | 1515 | 1515 | 1621321795 |        0 | t         |         4 |           |     0 | new
   3 | 1516 | 1516 | 1621322858 |        0 | t         |         4 |           |     0 | quo
   3 | 1517 | 1517 | 1621330009 |        0 | t         |         4 |           |     0 | sie
   3 | 1518 | 1518 | 1621335736 |    71770 | t         |         0 |           |  4939 | bou
   3 | 1519 | 1519 | 1621337381 |        0 | t         |         4 |           |     0 | lay
   3 | 1520 | 1520 | 1621344397 |        0 | t         |         4 |           |     0 | flo
   3 | 1521 | 1521 | 1621347071 |    42270 | t         |         1 |           |  3869 | bou
   3 | 1522 | 1522 | 1621351128 |    78663 | t         |         0 |           |  5974 | bou
   3 | 1523 | 1523 | 1621358665 |    11614 | t         |        25 |           |  8748 | tay
   3 | 1524 | 1524 | 1621364317 |        0 | t         |         4 |           |     0 | cro
   3 | 1525 | 1525 | 1621366071 |        0 | t         |         4 |           |     0 | spi
   3 | 1526 | 1526 | 1621367226 |    12296 | t         |        29 |           |  8741 | orl
   3 | 1527 | 1527 | 1621367934 |        0 | t         |         4 |           |     0 | emp
   3 | 1528 | 1528 | 1621368331 |    46300 | t         |         0 |           |  5343 | bou
   3 | 1529 | 1529 | 1621369728 |        0 | t         |         4 |           |     0 | bli
   3 | 1530 | 1530 | 1621376912 |        0 | t         |         4 |           |     0 | lob
   3 | 1531 | 1531 | 1621378723 |        0 | t         |         4 |           |     0 | lob
   3 | 1532 | 1532 | 1621380704 |    97240 | t         |         0 |           |  5935 | bou
   3 | 1533 | 1533 | 1621382383 |        0 | t         |         4 |           |     0 | ste
   3 | 1534 | 1534 | 1621386736 |   142869 | t         |        10 |           |  3642 | ujk
   3 | 1535 | 1535 | 1621389651 |     1776 | t         |         0 |           |  1349 | mar
   3 | 1536 | 1536 | 1621389518 |        0 | t         |         4 |           |     0 | cot
   3 | 1537 | 1537 | 1621391318 |        0 | t         |         4 |           |     0 | mar
   3 | 1538 | 1538 | 1621393157 |        0 | t         |         4 |           |     0 | cro
   3 | 1539 | 1539 | 1621396682 |        0 | t         |         4 |           |     0 | spi
   3 | 1540 | 1540 | 1621400310 |        0 | t         |         4 |           |     0 | flo
   3 | 1541 | 1541 | 1621400888 |   120436 | t         |         1 |           |  4957 | bou
   3 | 1542 | 1542 | 1621403795 |        0 | t         |         4 |           |     0 | klu
   3 | 1543 | 1543 | 1621407520 |        0 | t         |         4 |           |     0 | bli
   3 | 1544 | 1544 | 1621409338 |        0 | t         |         4 |           |     0 | cav
   3 | 1545 | 1545 | 1621411083 |        0 | t         |         4 |           |     0 | swe
   3 | 1546 | 1546 | 1621413703 |        0 | t         |         4 |           |     0 | sie
   3 | 1547 | 1547 | 1621422961 |   131329 | t         |         1 |           |  8228 | bou
   3 | 1548 | 1548 | 1621427789 |    58075 | t         |         0 |           |  6361 | 010
   3 | 1549 | 1549 | 1621437022 |   159726 | t         |         1 |           |  4546 | bou
   3 | 1550 | 1550 | 1621438051 |    15660 | t         |         0 |           |  4380 | bou
   3 | 1551 | 1551 | 1621440467 |    45014 | t         |         1 |           |  3214 | bou
   3 | 1552 | 1552 | 1621458475 |   163867 | t         |         8 |           |  3747 | erd
   3 | 1553 | 1553 | 1621487295 |   108964 | t         |         1 |           |  4965 | bou
   3 | 1554 | 1554 | 1621489147 |     7977 | f         |         0 |           |   101 | sim
   3 | 1555 | 1555 | 1621489147 |     7939 | f         |         0 |           |    47 | sim
   3 | 1556 | 1556 | 1621489147 |     7924 | f         |         0 |           |    44 | sim
   3 | 1557 | 1557 | 1621497477 |    36723 | t         |         1 |           |  2918 | mag
   3 | 1558 | 1558 | 1621505668 |    83481 | t         |         0 |           |  6718 | bou
   3 | 1559 | 1559 | 1621519324 |    58352 | t         |         7 |           |  6272 | sui
   3 | 1560 | 1560 | 1621522674 |    61335 | t         |         1 |           |  3529 | bou
   3 | 1561 | 1561 | 1621524082 |    85369 | t         |         0 |           |  6004 | bou
   3 | 1562 | 1562 | 1621526195 |    89726 | t         |         0 |           |  4887 | msp
   3 | 1563 | 1563 | 1621531326 |    40355 | t         |         0 |           |  5443 | inf
   3 | 1564 | 1564 | 1621531930 |    31376 | t         |         0 |           |  3796 | bou
   3 | 1565 | 1565 | 1621541590 |    66085 | t         |         0 |           |  6310 | bou
   3 | 1566 | 1566 | 1621546278 |   113126 | t         |        12 |           |  4461 | yzt
   3 | 1567 | 1567 | 1621549438 |    35805 | t         |         0 |           |  3860 | bou
   3 | 1568 | 1568 | 1621569612 |     7939 | f         |         0 |           |    65 | sim
   3 | 1569 | 1569 | 1621569977 |    99874 | t         |         0 |           |  4002 | ema

I wasn't able to paste the whole select because code entry is limited to 15K characters, so I pasted the first hundred rows out of around one thousand.

EDIT:
I ran that command on all three nodes and the outputs are identical on all nodes.
 