[SOLVED] corosync crash when adding a 15th node

ysyldur

Member
Dec 3, 2020
I had a cluster of 13 nodes. I added 3 more nodes, and within 5 minutes I lost the whole cluster. I restarted corosync node by node, but as soon as I start it on a 15th node I get this message:
Code:
corosync[29232]:   [TOTEM ] Token has not been received in 380 ms

Then, after a few minutes, the cluster crashes:
Code:
Dec 29 19:11:16 xinfvirtrc23b corosync[29232]:   [TOTEM ] Token has not been received in 379 ms
Dec 29 19:11:18 xinfvirtrc23b corosync[29232]:   [TOTEM ] Retransmit List: 6 1b 5 8 d 10 16 17 19 1a 1c 1d 1e 24 25 27 28 29 2a 2e 2f 11 26 30 32 31 2b
Dec 29 19:11:26 xinfvirtrc23b corosync[29232]:   [TOTEM ] Token has not been received in 385 ms
Dec 29 19:11:28 xinfvirtrc23b corosync[29232]:   [TOTEM ] Retransmit List: 10 16 17 19 1a 1d 1e 25 28 2a 30 32 3a 36 39 3e 42 43
Dec 29 19:12:13 xinfvirtrc23b corosync[29232]:   [KNET  ] link: host: 6 link: 0 is down
Dec 29 19:12:13 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 6 (passive) best link: 0 (pri: 1)
Dec 29 19:12:13 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 6 has no active links
Dec 29 19:12:18 xinfvirtrc23b corosync[29232]:   [KNET  ] rx: host: 6 link: 0 is up
Dec 29 19:12:18 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 6 (passive) best link: 0 (pri: 1)
Dec 29 19:12:39 xinfvirtrc23b corosync[29232]:   [KNET  ] link: host: 5 link: 0 is down
Dec 29 19:12:39 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 5 (passive) best link: 0 (pri: 1)
Dec 29 19:12:39 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 5 has no active links
Dec 29 19:12:42 xinfvirtrc23b corosync[29232]:   [KNET  ] link: host: 2 link: 0 is down
Dec 29 19:12:42 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Dec 29 19:12:42 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 2 has no active links
Dec 29 19:12:44 xinfvirtrc23b corosync[29232]:   [KNET  ] rx: host: 5 link: 0 is up
Dec 29 19:12:44 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 5 (passive) best link: 0 (pri: 1)
Dec 29 19:12:50 xinfvirtrc23b pvedaemon[47715]: <root@pam> successful auth for user 'root@pam'
Dec 29 19:13:02 xinfvirtrc23b corosync[29232]:   [TOTEM ] Token has not been received in 8028 ms
Dec 29 19:14:00 xinfvirtrc23b corosync[29232]:   [KNET  ] link: host: 1 link: 0 is down
Dec 29 19:14:00 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Dec 29 19:14:00 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 1 has no active links
Dec 29 19:14:12 xinfvirtrc23b corosync[29232]:   [TOTEM ] Token has not been received in 388 ms
Dec 29 19:14:27 xinfvirtrc23b corosync[29232]:   [TOTEM ] Token has not been received in 389 ms
Dec 29 19:14:39 xinfvirtrc23b corosync[29232]:   [KNET  ] link: host: 7 link: 0 is down
Dec 29 19:14:39 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 7 (passive) best link: 0 (pri: 1)
Dec 29 19:14:39 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 7 has no active links
Dec 29 19:14:42 xinfvirtrc23b corosync[29232]:   [KNET  ] link: host: 8 link: 0 is down
Dec 29 19:14:42 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 8 (passive) best link: 0 (pri: 1)
Dec 29 19:14:42 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 8 has no active links
Dec 29 19:14:44 xinfvirtrc23b corosync[29232]:   [KNET  ] link: host: 9 link: 0 is down
Dec 29 19:14:44 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 9 (passive) best link: 0 (pri: 1)
Dec 29 19:14:44 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 9 has no active links
Dec 29 19:14:47 xinfvirtrc23b corosync[29232]:   [KNET  ] link: host: 10 link: 0 is down
Dec 29 19:14:47 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 10 (passive) best link: 0 (pri: 1)
Dec 29 19:14:47 xinfvirtrc23b corosync[29232]:   [KNET  ] host: host: 10 has no active links
Dec 29 19:14:47 xinfvirtrc23b corosync[29232]:   [TOTEM ] Token has not been received in 12683 ms
Dec 29 19:14:54 xinfvirtrc23b sshd[24883]: Accepted publickey for root from 10.201.12.52 port 56162 ssh2: RSA SHA256:NlHJccEIhs53WfwenX3bfoMBj/+KnyePlOdMxGkATEE
Dec 29 19:14:54 xinfvirtrc23b sshd[24883]: pam_unix(sshd:session): session opened for user root by (uid=0)
Dec 29 19:14:54 xinfvirtrc23b systemd-logind[2279]: New session 492 of user root.
Dec 29 19:14:54 xinfvirtrc23b systemd[1]: Started Session 492 of user root.
Dec 29 19:14:54 xinfvirtrc23b systemd[1]: Stopping Corosync Cluster Engine...
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [MAIN  ] Node was shut down by a signal
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Unloading all Corosync service engines.
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [QB    ] withdrawing server sockets
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Service engine unloaded: corosync vote quorum service v1.0
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [confdb] crit: cmap_dispatch failed: 2
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [QB    ] withdrawing server sockets
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Service engine unloaded: corosync configuration map access
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [QB    ] withdrawing server sockets
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Service engine unloaded: corosync configuration service
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [status] crit: cpg_dispatch failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [status] crit: cpg_leave failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] crit: cpg_dispatch failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] crit: cpg_leave failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] crit: cpg_send_message failed: 9
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] crit: cpg_send_message failed: 9
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] crit: cpg_send_message failed: 9
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] crit: cpg_send_message failed: 9
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [QB    ] withdrawing server sockets
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Service engine unloaded: corosync cluster closed process group service v1.01
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [quorum] crit: quorum_dispatch failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [status] notice: node lost quorum
Dec 29 19:14:54 xinfvirtrc23b pve-ha-lrm[4024]: unable to write lrm status file - unable to open file '/etc/pve/nodes/xinfvirtrc23b/lrm_status.tmp.4024' - Device or resource busy
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [QB    ] withdrawing server sockets
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Service engine unloaded: corosync cluster quorum service v0.1
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Service engine unloaded: corosync profile loading service
Dec 29 19:14:54 xinfvirtrc23b pvesr[15994]: trying to acquire cfs lock 'file-replication_cfg' ...
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Service engine unloaded: corosync resource monitoring service
Dec 29 19:14:54 xinfvirtrc23b corosync[29232]:   [SERV  ] Service engine unloaded: corosync watchdog service
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [quorum] crit: quorum_initialize failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [quorum] crit: can't initialize service
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [confdb] crit: cmap_initialize failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [confdb] crit: can't initialize service
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] notice: start cluster connection
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] crit: cpg_initialize failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [dcdb] crit: can't initialize service
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [status] notice: start cluster connection
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [status] crit: cpg_initialize failed: 2
Dec 29 19:14:54 xinfvirtrc23b pmxcfs[43531]: [status] crit: can't initialize service
Dec 29 19:14:55 xinfvirtrc23b corosync[29232]:   [MAIN  ] Corosync Cluster Engine exiting normally

proxmox 6.3
kernel: 5.4.78-2-pve
Corosync 3.0.4
libknet1/stable,now 1.16-pve1 amd64

Code:
Cluster information
-------------------
Name:             kubeRCbess
Config Version:   42
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Dec 29 20:04:02 2020
Quorum provider:  corosync_votequorum
Nodes:            13
Node ID:          0x00000010
Ring ID:          1.70ce
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   16
Highest expected: 16
Total votes:      13
Quorum:           9 
Flags:            Quorate

Membership information
----------------------
 
Hi ysyldur,

Can you tell us a little bit more about your corosync network connection? (Physical and virtual situation: interfaces, switches, connection speed, etc.)

We had similar messages, like: corosync[29232]: [TOTEM ] Token has not been received in 380 ms
on a smaller cluster. Our problem was caused by the LACP connection used for the corosync network. Now we are working with two redundant rings and have had no problems for weeks.

Have you tried to omping all cluster members before adding the 15th node?

Start omping on all nodes with the following command and check the output. This is the precise version; it sends 10000 packets at an interval of 1 ms:

omping -c 10000 -i 0.001 -F -q node1 node2 node3

If the latency is too high (above 6-7 ms), corosync may not work properly.
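
If omping is not installed yet, it should be available from the standard Debian/Proxmox repositories (assuming default package sources):
Code:
apt install omping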

Addendum 1: Concerning the redundant links, the wiki says:


Redundant Ring Protocol
To be safe when the switch used for corosync fails, and also to get faster throughput on the cluster communication - which may be helpful on big setups with a lot of nodes - you can use redundant rings. Those rings must run on two physically separated networks, else you won't gain anything on the High Availability side.
To use it, first configure another interface and hostnames for your second ring, as described above.

from: https://pve.proxmox.com/wiki/Separate_Cluster_Network#Redundant_Ring_Protocol (This is an old article and you will now find this information in the documentation: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_redundancy)
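
Just as an illustration (the node name, ID, and the 10.10.10.x addresses are invented; your real config will look different), a second ring ends up in /etc/pve/corosync.conf roughly like this:
Code:
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.123.1.83   # existing corosync network
    ring1_addr: 10.10.10.83   # second, physically separate network
  }
  # ...one entry like this per node
}

totem {
  ...
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ...
}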
 
Hi,
thanks for your reply. My corosync connection:
it uses the same interface as the VM, storage and GUI traffic, with only a single 1G link.

Code:
cat /etc/network/interfaces:

auto vmbr0
iface vmbr0 inet static
    address 10.123.1.83/24
    gateway 10.123.1.254
    bridge-ports eno1.470
    bridge-stp off
    bridge-fd 0


allow-vmbr1 eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eno1


After reading the docs, I will try to separate corosync from the VM/GUI network by adding a new connection.

But I was wondering: could I stabilize the cluster by adjusting the knet and token parameters?
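
For illustration, these parameters live in the totem section of /etc/pve/corosync.conf. The values below are only placeholders on my side, nothing I have applied or tested yet:
Code:
totem {
  ...
  # placeholder values, not recommendations
  token: 10000
  window_size: 50
  max_messages: 17
  ...
}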
 


I think a single link for all traffic is not the best way of doing it.
You can do it, but as you have now obviously seen, you run into problems when the cluster becomes bigger. If all nodes talk to each other, the latency becomes too high and fencing starts.
Also, even if it works with 14 nodes, in situations with high network load (backups, migration, etc.) the latency could become a problem.

In this thread you can find a little bit more about changing corosync's window size, but I do not really recommend this:

https://forum.proxmox.com/threads/howto-fix-corosync-totem-retransmit-list-errors.23795/

The best way would be to separate the networks, if that is possible.
 
I haven't made any configuration changes yet; I just activated debug mode and reproduced the problem
(the addition of host 4).
But in the logs I do not see anything that could help me pinpoint the issue.
 


omping doesn't respond:
Code:
root@xinfvirtrc15b:~# omping -c 10000 -i 0.001 -F -q 10.123.1.132 10.123.1.188 10.123.1.179
10.123.1.188 : waiting for response msg
10.123.1.179 : waiting for response msg
10.123.1.188 : waiting for response msg
10.123.1.179 : waiting for response msg
10.123.1.188 : waiting for response msg
10.123.1.179 : waiting for response msg
^C
10.123.1.188 : response message never received
10.123.1.179 : response message never received

IGMP snooping is disabled on the switches.

I ran omping on only a few nodes.

But I don't understand why we test multicast; corosync 3 with knet normally works in unicast, or have I misunderstood how kronosnet works?
 
You have to install omping on all the machines you want to test.
Then you have to fire up
omping -c 10000 -i 0.001 -F -q node1 node2 node3 nodeX
on all the listed nodes (node1, node2, node3, nodeX) at the same time, otherwise there will not be any responses.
In your case, start it (ideally at the same time, via SSH) on 10.123.1.132, 10.123.1.188 and 10.123.1.179. If you start it one after another through the GUI (some seconds in between), a few packets will be lost, but that is not a problem.
You are totally right that this corosync version works with unicast; you will also get a unicast result from omping.
By testing with this procedure you can simulate what is going on in the normal case, when all your cluster nodes are sending their traffic.
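
A rough way to start the run everywhere at the same moment (this assumes passwordless root SSH between the nodes; adapt the address list to the nodes you want to test):
Code:
#!/bin/bash
# fire the same omping run on every node in parallel and collect one report per node
NODES="10.123.1.132 10.123.1.188 10.123.1.179"
for n in $NODES; do
    ssh root@"$n" "omping -c 10000 -i 0.001 -F -q $NODES" > "omping-$n.log" 2>&1 &
done
wait   # when all runs are done, read the omping-*.log files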

E.g., results of a three-node cluster:

192.168.252.49 : unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.012/0.045/0.352/0.023

192.168.252.49 : multicast, xmt/rcv/%loss = 10000/9999/0% (seq>=2 0%), min/avg/max/std-dev = 0.014/0.049/0.395/0.024

192.168.252.50 : unicast, xmt/rcv/%loss = 10000/9995/0%, min/avg/max/std-dev = 0.011/0.045/0.358/0.029

192.168.252.50 : multicast, xmt/rcv/%loss = 10000/9994/0% (seq>=2 0%), min/avg/max/std-dev = 0.014/0.049/0.367/0.029

Here you can see the min/avg/max latency in unicast.

If many nodes are in contact with each other, the latency may become higher and corosync gets into trouble. As written above, the latency should stay under 6-7 ms when all nodes are pinging.

This is a short-term test. If you run
omping -c 600 -i 1 -q node1 node2 node3 nodeX
you get a result over a period of about 10 minutes. Then you will also see the impact of the other traffic running over your interface.

Both tests should show a latency under 6-7 ms (look at max) to ensure that corosync can work properly from the network side.
 
OK, thanks for the answer, I hadn't understood how omping works.

First, only 3 nodes:
Code:
omping -c 10000 -i 0.001 -F -q 10.123.1.179 10.123.1.132 10.123.1.188
10.123.1.132 : waiting for response msg
10.123.1.188 : waiting for response msg
10.123.1.132 : waiting for response msg
10.123.1.188 : waiting for response msg
10.123.1.188 : joined (S,G) = (*, 232.43.211.234), pinging
10.123.1.132 : waiting for response msg
10.123.1.132 : waiting for response msg
10.123.1.132 : joined (S,G) = (*, 232.43.211.234), pinging
10.123.1.188 : given amount of query messages was sent
10.123.1.132 : waiting for response msg
10.123.1.132 : server told us to stop

10.123.1.132 :   unicast, xmt/rcv/%loss = 9529/9529/0%, min/avg/max/std-dev = 0.073/0.130/0.637/0.024
10.123.1.132 : multicast, xmt/rcv/%loss = 9529/9529/0%, min/avg/max/std-dev = 0.078/0.130/0.637/0.025
10.123.1.188 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.072/0.132/7.539/0.145
10.123.1.188 : multicast, xmt/rcv/%loss = 10000/9994/0%, min/avg/max/std-dev = 0.078/0.135/8.453/0.203

Now I will do the test on all nodes.
 
Beware: if you are already experiencing issues, the steps taken to diagnose the problem may make it worse in the short term!
You already have high latency to node 10.123.1.188:
10.123.1.188 : unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.072/0.132/7.539/0.145
What do the results on the other nodes look like? (If you test on 3 nodes, there should be output on all three nodes.)
 
Hi
the output from all nodes:
Code:
10.123.1.188 :   unicast, xmt/rcv/%loss = 9014/9005/0%, min/avg/max/std-dev = 0.067/0.176/22.110/0.755
10.123.1.188 : multicast, xmt/rcv/%loss = 9014/7584/15% (seq>=1471 0%), min/avg/max/std-dev = 0.068/18.251/1715.497/173.905
10.123.1.179 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.052/0.150/1.631/0.085
10.123.1.179 : multicast, xmt/rcv/%loss = 10000/9940/0%, min/avg/max/std-dev = 0.059/0.659/265.926/9.602
10.123.1.184 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.056/0.143/2.843/0.072
10.123.1.184 : multicast, xmt/rcv/%loss = 10000/9935/0%, min/avg/max/std-dev = 0.056/0.591/250.548/8.838
10.123.1.129 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.071/0.163/0.957/0.075
10.123.1.129 : multicast, xmt/rcv/%loss = 10000/9496/5% (seq>=474 0%), min/avg/max/std-dev = 0.074/4.764/1746.413/70.259
10.123.1.131 :   unicast, xmt/rcv/%loss = 9176/9176/0%, min/avg/max/std-dev = 0.061/0.154/1.013/0.062
10.123.1.131 : multicast, xmt/rcv/%loss = 9176/8349/9% (seq>=762 0%), min/avg/max/std-dev = 0.061/7.541/1712.769/111.107
10.123.1.132 :   unicast, xmt/rcv/%loss = 9183/9183/0%, min/avg/max/std-dev = 0.061/0.162/2.052/0.095
10.123.1.132 : multicast, xmt/rcv/%loss = 9183/9097/0%, min/avg/max/std-dev = 0.061/0.163/2.058/0.096
10.123.1.165 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.064/0.154/0.825/0.057
10.123.1.165 : multicast, xmt/rcv/%loss = 10000/9351/6% (seq>=603 0%), min/avg/max/std-dev = 0.064/5.161/1771.724/75.038
10.123.1.114 :   unicast, xmt/rcv/%loss = 9179/9172/0%, min/avg/max/std-dev = 0.060/0.192/17.492/0.586
10.123.1.114 : multicast, xmt/rcv/%loss = 9179/9058/1%, min/avg/max/std-dev = 0.064/0.193/17.496/0.591
10.123.1.120 :   unicast, xmt/rcv/%loss = 9016/9016/0%, min/avg/max/std-dev = 0.068/0.142/1.137/0.059
10.123.1.120 : multicast, xmt/rcv/%loss = 9016/8906/1%, min/avg/max/std-dev = 0.072/0.147/1.170/0.060
10.123.1.56  :   unicast, xmt/rcv/%loss = 10000/9978/0%, min/avg/max/std-dev = 0.049/0.286/5.406/0.784
10.123.1.56  : multicast, xmt/rcv/%loss = 10000/9712/2% (seq>=258 0%), min/avg/max/std-dev = 0.051/4.335/1697.956/59.408
10.123.1.64  :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.045/0.175/4.698/0.441
10.123.1.64  : multicast, xmt/rcv/%loss = 10000/9670/3% (seq>=327 0%), min/avg/max/std-dev = 0.049/4.324/1746.606/62.300
10.123.1.67  :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.051/0.137/4.116/0.146
10.123.1.67  : multicast, xmt/rcv/%loss = 10000/9485/5% (seq>=21 4%), min/avg/max/std-dev = 0.049/3.681/1698.377/58.861
10.123.1.92  :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.057/0.150/3.278/0.084
10.123.1.92  : multicast, xmt/rcv/%loss = 10000/9433/5% (seq>=477 0%), min/avg/max/std-dev = 0.063/4.736/1752.450/72.310
10.123.1.80  :   unicast, xmt/rcv/%loss = 8542/8542/0%, min/avg/max/std-dev = 0.058/0.146/0.587/0.055
10.123.1.80  : multicast, xmt/rcv/%loss = 8542/6599/22% (seq>=1947 0%), min/avg/max/std-dev = 0.063/32.262/1986.588/230.461
10.123.1.83  :   unicast, xmt/rcv/%loss = 10000/9999/0%, min/avg/max/std-dev = 0.060/0.158/6.594/0.165
10.123.1.83  : multicast, xmt/rcv/%loss = 10000/8340/16% (seq>=1577 0%), min/avg/max/std-dev = 0.064/16.912/2485.404/172.029
10.123.1.82  :   unicast, xmt/rcv/%loss = 9181/9181/0%, min/avg/max/std-dev = 0.059/0.136/0.852/0.056
10.123.1.82  : multicast, xmt/rcv/%loss = 9181/9042/1%, min/avg/max/std-dev = 0.062/0.140/0.782/0.057


Indeed, we observe high latencies in multicast, but not in unicast.
 
10.123.1.188 : unicast, xmt/rcv/%loss = 9014/9005/0%, min/avg/max/std-dev = 0.067/0.176/22.110/0.755
10.123.1.114 : unicast, xmt/rcv/%loss = 9179/9172/0%, min/avg/max/std-dev = 0.060/0.192/17.492/0.586
10.123.1.83 : unicast, xmt/rcv/%loss = 10000/9999/0%, min/avg/max/std-dev = 0.060/0.158/6.594/0.165

Maybe it is also those three nodes that you do not get a valid vote from?

Votequorum information
----------------------
Expected votes: 16
Highest expected: 16
Total votes: 13
Quorum: 9
Flags: Quorate

Membership information
 
Have you checked against the requirements? (A few commands to spot-check some of them are sketched after the link below.)
  • All nodes must be able to connect to each other via UDP ports 5404 and 5405 for corosync to work. (Firewall?)
  • Date and time have to be synchronized. (Very Essential, that the cluster could run!)
  • SSH tunnel on TCP port 22 between nodes is used.
  • If you are interested in High Availability, you need to have at least three nodes for reliable quorum. All nodes should have the same version.
  • We recommend a dedicated NIC for the cluster traffic, especially if you use shared storage.
  • Root password of a cluster node is required for adding nodes.
https://pve.proxmox.com/wiki/Cluster_Manager
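
Just a quick sanity check under the assumption of a default setup; it does not replace a proper firewall/NTP verification. Run on each node:
Code:
ss -uln | grep -E ':540[45]'   # corosync/knet should be listening on its UDP port(s)
timedatectl                    # "System clock synchronized: yes" is what you want
pveversion                     # should report the same version on every node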
 
All nodes are on the same version, 6.3.3, kernel 5.4.78-2-pve.
SSH works between all nodes.
Time is synchronized (time-sync) and I checked the date.

In another data center I have the exact same problem. The latency is a little better; I can have 15 nodes out of 16.
But as soon as I try to add a 16th node, here is what happens:
The member joins the cluster:
Code:
Jan 08 10:35:23 xinfvirtrc02u corosync[26118]:   [TOTEM ] A new membership (1.2cd0) was formed. Members joined: 9
Jan 08 10:35:23 xinfvirtrc02u pmxcfs[6376]: [dcdb] notice: members: 1/1746, 2/1795, 3/1764, 4/1786, 5/1751, 6/2555, 7/1754, 8/1672, 9/1766, 10/1713, 11/4036, 12/3947, 13/8290, 14/17597, 15/24807, 16/6376
Jan 08 10:35:23 xinfvirtrc02u pmxcfs[6376]: [dcdb] notice: starting data syncronisation
Jan 08 10:35:23 xinfvirtrc02u pmxcfs[6376]: [status] notice: members: 1/1746, 2/1795, 3/1764, 4/1786, 5/1751, 6/2555, 7/1754, 8/1672, 9/1766, 10/1713, 11/4036, 12/3947, 13/8290, 14/17597, 15/24807, 16/6376
Jan 08 10:35:23 xinfvirtrc02u pmxcfs[6376]: [status] notice: starting data syncronisation
Jan 08 10:35:23 xinfvirtrc02u corosync[26118]:   [QUORUM] Members[16]: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Jan 08 10:35:23 xinfvirtrc02u corosync[26118]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jan 08 10:35:23 xinfvirtrc02u pmxcfs[6376]: [dcdb] notice: received sync request (epoch 1/1746/000001CA)
Jan 08 10:35:23 xinfvirtrc02u pmxcfs[6376]: [status] notice: received sync request (epoch 1/1746/000001C9)

Then the node remains gray with a "?" in the GUI, and in the logs:
Code:
Jan 08 10:35:46 xinfvirtrc02u corosync[26118]:   [TOTEM ] Token has not been received in 363 ms
Jan 08 10:36:00 xinfvirtrc02u systemd[1]: Starting Proxmox VE replication runner...
Jan 08 10:36:01 xinfvirtrc02u corosync[26118]:   [TOTEM ] Token has not been received in 364 ms
Jan 08 10:36:15 xinfvirtrc02u corosync[26118]:   [TOTEM ] Token has not been received in 363 ms
Jan 08 10:36:45 xinfvirtrc02u corosync[26118]:   [TOTEM ] Token has not been received in 363 ms

So it looks like the cluster cannot circulate the token anymore.
I tried with different nodes of the cluster, but the result is the same: it is impossible to have all 16 nodes up at the same time. I have to stop corosync on one of the nodes.

I tried different corosync configurations:
send_join: 80
increasing/decreasing max_messages
decreasing window_size
increasing token

I also tried the SCTP protocol, but it is not working; maybe our switches don't support it.


I also tuned UDP in sysctl.conf:
Code:
net.core.somaxconn = 8192
net.core.netdev_max_backlog = 262144
net.core.rmem_default = 31457280
net.core.wmem_default = 31457280
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 65535
net.core.optmem_max = 25165824
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384

but the result is the same. At the moment I cannot add a dedicated network link for corosync.
 
I fixed my problem with this parameter in corosync:
token_retransmit: 200

On the first DC, 17/17 nodes are now up, and on the second DC, 16/16 nodes.
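
For anyone hitting the same issue: the parameter goes into the totem section of /etc/pve/corosync.conf, roughly like this, and config_version has to be increased so the change is pushed to all nodes:
Code:
totem {
  ...
  config_version: 43          # one higher than the current value (it was 42 here)
  token_retransmit: 200       # fixed token retransmit timeout in ms
  ...
}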

Thanks for your help
 
