Corosync cluster problems (Help)

bferrell

Member
Nov 16, 2018
42
1
8
50
I need some help pretty badly. I was following along with this excellent write-up to create a second network for my corosync traffic, and I thought it was going pretty well, but now all 4 of my nodes are completely isolated somehow. I've attached the old and new corosync.conf files, and I've reverted all 4 to the original, but on node1 the corosync service will not start. The other 3 have it running, but don't form a quorum for some reason.

I changed the corosync.conf file and rebooted one node. All looked well, so I rebooted a second, and it paired up with the first. I rebooted the third, and it formed a quorum... but when I rebooted the last node (node1), it would not join the cluster. So, I tried to copy the old corosync.conf back to revert, but node1 didn't get the update, so I stopped corosync as described here and reverted it manually, and that's when it all went to hell.

What to do next (without making it worse)?



Original corosync.conf (reverted to currently)
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: svr-01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.100.11
  }
  node {
    name: svr-02
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.100.12
  }
  node {
    name: svr-03
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.100.13
  }
  node {
    name: svr-04
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.100.14
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: Congress
  config_version: 6
  interface {
    bindnetaddr: 192.168.100.11
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
New/separated corosync.conf
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: svr-01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 1corosync
  }
  node {
    name: svr-02
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 2corosync
  }
  node {
    name: svr-03
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 3corosync
  }
  node {
    name: svr-04
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 4corosync
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: Congress
  config_version: 7
  interface {
    bindnetaddr: 192.168.102.11
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
Hosts File
Code:
127.0.0.1 localhost.localdomain localhost
192.168.100.11 svr-01.bdfserver.com svr-01
192.168.100.12 svr-02.bdfserver.com svr-02
192.168.100.13 svr-03.bdfserver.com svr-03
192.168.100.14 svr-04.bdfserver.com svr-04 pvelocalhost

# corosync network hosts
192.168.102.11 1corosync.bdfserver.com 1corosync
192.168.102.12 2corosync.bdfserver.com 2corosync
192.168.102.13 3corosync.bdfserver.com 3corosync
192.168.102.14 4corosync.bdfserver.com 4corosync

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
 

bferrell
OK, I did get the other 3 back... even though I reverted corosync.conf, they were still trying to use the new interface (which I had unplugged to try to simplify the situation).
 

bferrell
I can't seem to get the web interface to come up now on node 1. Both interfaces are up, and I can ping both.

Code:
Brett-5k-iMac:~ bferrell$ ping -c 2 192.168.100.11
PING 192.168.100.11 (192.168.100.11): 56 data bytes
64 bytes from 192.168.100.11: icmp_seq=0 ttl=63 time=0.476 ms
64 bytes from 192.168.100.11: icmp_seq=1 ttl=63 time=0.398 ms

--- 192.168.100.11 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.398/0.437/0.476/0.039 ms
Brett-5k-iMac:~ bferrell$ ping -c 2 192.168.102.11
PING 192.168.102.11 (192.168.102.11): 56 data bytes
64 bytes from 192.168.102.11: icmp_seq=0 ttl=63 time=0.596 ms
64 bytes from 192.168.102.11: icmp_seq=1 ttl=63 time=0.302 ms

--- 192.168.102.11 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.302/0.449/0.596/0.147 ms
Code:
root@svr-01:~# ifconfig
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.102.11  netmask 255.255.255.0  broadcast 192.168.102.255
        inet6 fe80::baca:3aff:fef5:73b0  prefixlen 64  scopeid 0x20<link>
        ether b8:ca:3a:f5:73:b0  txqueuelen 1000  (Ethernet)
        RX packets 23436  bytes 10772642 (10.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 169  bytes 45816 (44.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 53

enp65s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0a:f7:58:53:32  txqueuelen 1000  (Ethernet)
        RX packets 40041  bytes 8872645 (8.4 MiB)
        RX errors 0  dropped 65  overruns 0  frame 0
        TX packets 2402  bytes 212077 (207.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 69  memory 0xd1000000-d17fffff

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1462  bytes 119714 (116.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1462  bytes 119714 (116.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.11  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::20a:f7ff:fe58:5332  prefixlen 64  scopeid 0x20<link>
        ether 00:0a:f7:58:53:32  txqueuelen 1000  (Ethernet)
        RX packets 40031  bytes 8154237 (7.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2407  bytes 203079 (198.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 

bferrell
I see that on node1, for some reason, my /etc/pve directory is empty. I'm trying to follow this advice to get corosync working, as all of node1's config seems to have been lost.
 

bferrell
Ok, following that article, I have node1 back up and can get into the web UI, but it's still isolated... so what's the easy way to get it back into the quorum? It's running the same corosync.conf as the other 3... I can ssh into the other nodes from node1, and it can ping the corosync NIC on the other nodes.

This is the syslog from node 1 (abbreviated)
Code:
Jul 11 21:13:07 svr-01 pvesr[5941]: trying to acquire cfs lock 'file-replication_cfg' ...
Jul 11 21:13:08 svr-01 pvesr[5941]: trying to acquire cfs lock 'file-replication_cfg' ...
Jul 11 21:13:09 svr-01 pvesr[5941]: trying to acquire cfs lock 'file-replication_cfg' ...
Jul 11 21:13:10 svr-01 pvesr[5941]: error with cfs lock 'file-replication_cfg': no quorum!
Jul 11 21:13:10 svr-01 systemd[1]: pvesr.service: Main process exited, code=exited, status=13/n/a
Jul 11 21:13:10 svr-01 systemd[1]: Failed to start Proxmox VE replication runner.
Jul 11 21:13:10 svr-01 systemd[1]: pvesr.service: Unit entered failed state.
Jul 11 21:13:10 svr-01 systemd[1]: pvesr.service: Failed with result 'exit-code'.
Jul 11 21:13:12 svr-01 pmxcfs[1980]: [quorum] crit: quorum_initialize failed: 2
Jul 11 21:13:12 svr-01 pmxcfs[1980]: [confdb] crit: cmap_initialize failed: 2
Jul 11 21:13:12 svr-01 pmxcfs[1980]: [dcdb] crit: cpg_initialize failed: 2
Jul 11 21:13:12 svr-01 pmxcfs[1980]: [status] crit: cpg_initialize failed: 2
Jul 11 21:13:18 svr-01 pmxcfs[1980]: [quorum] crit: quorum_initialize failed: 2
Jul 11 21:13:18 svr-01 pmxcfs[1980]: [confdb] crit: cmap_initialize failed: 2
Jul 11 21:13:18 svr-01 pmxcfs[1980]: [dcdb] crit: cpg_initialize failed: 2
Jul 11 21:13:18 svr-01 pmxcfs[1980]: [status] crit: cpg_initialize failed: 2
Jul 11 21:13:24 svr-01 pmxcfs[1980]: [quorum] crit: quorum_initialize failed: 2
Jul 11 21:13:24 svr-01 pmxcfs[1980]: [confdb] crit: cmap_initialize failed: 2
Jul 11 21:13:24 svr-01 pmxcfs[1980]: [dcdb] crit: cpg_initialize failed: 2
Jul 11 21:13:24 svr-01 pmxcfs[1980]: [status] crit: cpg_initialize failed: 2
Jul 11 21:13:30 svr-01 pmxcfs[1980]: [quorum] crit: quorum_initialize failed: 2
Jul 11 21:13:30 svr-01 pmxcfs[1980]: [confdb] crit: cmap_initialize failed: 2
Jul 11 21:13:30 svr-01 pmxcfs[1980]: [dcdb] crit: cpg_initialize failed: 2
Jul 11 21:13:30 svr-01 pmxcfs[1980]: [status] crit: cpg_initialize failed: 2
Jul 11 21:13:36 svr-01 pmxcfs[1980]: [quorum] crit: quorum_initialize failed: 2
Jul 11 21:13:36 svr-01 pmxcfs[1980]: [confdb] crit: cmap_initialize failed: 2
Jul 11 21:13:36 svr-01 pmxcfs[1980]: [dcdb] crit: cpg_initialize failed: 2
Jul 11 21:13:36 svr-01 pmxcfs[1980]: [status] crit: cpg_initialize failed: 2
Jul 11 21:13:42 svr-01 pmxcfs[1980]: [quorum] crit: quorum_initialize failed: 2
Jul 11 21:13:42 svr-01 pmxcfs[1980]: [confdb] crit: cmap_initialize failed: 2
Jul 11 21:13:42 svr-01 pmxcfs[1980]: [dcdb] crit: cpg_initialize failed: 2
Jul 11 21:13:42 svr-01 pmxcfs[1980]: [status] crit: cpg_initialize failed: 2
Jul 11 21:13:48 svr-01 pmxcfs[1980]: [quorum] crit: quorum_initialize failed: 2
Jul 11 21:13:48 svr-01 pmxcfs[1980]: [confdb] crit: cmap_initialize failed: 2
Jul 11 21:13:48 svr-01 pmxcfs[1980]: [dcdb] crit: cpg_initialize failed: 2
Jul 11 21:13:48 svr-01 pmxcfs[1980]: [status] crit: cpg_initialize failed: 2
Jul 11 21:13:54 svr-01 pmxcfs[1980]: [quorum] crit: quorum_initialize failed: 2
Status
Code:
root@svr-01:~# service corosync status
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2019-07-11 21:28:00 EDT; 49min ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
  Process: 2076 ExecStart=/usr/sbin/corosync -f $COROSYNC_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2076 (code=exited, status=0/SUCCESS)
      CPU: 223ms

Jul 11 21:28:00 svr-01 corosync[2076]:  [QB    ] withdrawing server sockets
Jul 11 21:28:00 svr-01 corosync[2076]:  [SERV  ] Service engine unloaded: corosync configuration serv
Jul 11 21:28:00 svr-01 corosync[2076]: info    [QB    ] withdrawing server sockets
Jul 11 21:28:00 svr-01 corosync[2076]: notice  [SERV  ] Service engine unloaded: corosync cluster clo
Jul 11 21:28:00 svr-01 corosync[2076]: info    [QB    ] withdrawing server sockets
Jul 11 21:28:00 svr-01 corosync[2076]: notice  [SERV  ] Service engine unloaded: corosync cluster quo
Jul 11 21:28:00 svr-01 corosync[2076]: notice  [SERV  ] Service engine unloaded: corosync profile loa
Jul 11 21:28:00 svr-01 corosync[2076]: notice  [SERV  ] Service engine unloaded: corosync resource mo
Jul 11 21:28:00 svr-01 corosync[2076]: notice  [SERV  ] Service engine unloaded: corosync watchdog se
Jul 11 21:28:00 svr-01 corosync[2076]: notice  [MAIN  ] Corosync Cluster Engine exiting normally
Service Status
Code:
root@svr-01:~# ps aux | grep corosync
root     10783  0.0  0.0  39596  4672 pts/0    T    22:17   0:00 systemctl --job-mode=ignore-dependencies status corosync.service
root     15102  0.0  0.0  12784   936 pts/0    S+   22:42   0:00 grep corosync
corosync.conf
Code:
root@svr-01:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: svr-01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.100.11
  }
  node {
    name: svr-02
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.100.12
  }
  node {
    name: svr-03
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.100.13
  }
  node {
    name: svr-04
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.100.14
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: Congress
  config_version: 6
  interface {
    bindnetaddr: 192.168.100.11
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
Hosts
Code:
root@svr-01:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.100.11 svr-01.bdfserver.com svr-01 pvelocalhost
192.168.100.12 svr-02.bdfserver.com svr-02
192.168.100.13 svr-03.bdfserver.com svr-03
192.168.100.14 svr-04.bdfserver.com svr-04

# corosync network hosts
192.168.102.11 1corosync.bdfserver.com 1corosync
192.168.102.12 2corosync.bdfserver.com 2corosync
192.168.102.13 3corosync.bdfserver.com 3corosync
192.168.102.14 4corosync.bdfserver.com 4corosync

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Ping
Code:
root@svr-01:~# ping -c 2 svr-02
PING svr-02.bdfserver.com (192.168.100.12) 56(84) bytes of data.
64 bytes from svr-02.bdfserver.com (192.168.100.12): icmp_seq=1 ttl=64 time=0.199 ms
64 bytes from svr-02.bdfserver.com (192.168.100.12): icmp_seq=2 ttl=64 time=0.210 ms

--- svr-02.bdfserver.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1030ms
rtt min/avg/max/mdev = 0.199/0.204/0.210/0.015 ms
Status
Code:
root@svr-02:/etc/pve# pvecm status
Quorum information
------------------
Date:             Thu Jul 11 22:53:17 2019
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000003
Ring ID:          3/264
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 192.168.102.12 (local)
0x00000002          1 192.168.102.13
0x00000004          1 192.168.102.14
but on the isolated node1
Code:
root@svr-01:~# pvecm status
Cannot initialize CMAP service
omping
Code:
root@svr-02:/etc/pve# omping -c 10000 -i 0.001 -F -q svr-01 svr-02 svr-03 svr-04
svr-01 : waiting for response msg
svr-03 : waiting for response msg
svr-04 : waiting for response msg
svr-03 : joined (S,G) = (*, 232.43.211.234), pinging
svr-04 : joined (S,G) = (*, 232.43.211.234), pinging
svr-01 : waiting for response msg
svr-01 : joined (S,G) = (*, 232.43.211.234), pinging
svr-03 : given amount of query messages was sent
svr-04 : given amount of query messages was sent
svr-01 : given amount of query messages was sent

svr-01 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.054/0.085/0.253/0.019
svr-01 : multicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.065/0.097/0.273/0.019
svr-03 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.060/0.295/2.590/0.137
svr-03 : multicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.108/0.355/2.650/0.136
svr-04 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.054/0.087/0.216/0.019
svr-04 : multicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.064/0.098/0.256/0.019
 

bferrell
So... would it be easier to move the guest VM's config (from this article) with "mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/", delete the node from the cluster, and rejoin?
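For reference, that route would look roughly like the dry-run sketch below (node names and VMID 100 are taken from this thread; nothing here touches the cluster, it only prints the commands that would be run):

```shell
# Dry-run sketch of the move-guest / delete-node / rejoin route.
NODE1=svr-01; NODE2=svr-02; VMID=100
echo "mv /etc/pve/nodes/$NODE1/qemu-server/$VMID.conf /etc/pve/nodes/$NODE2/qemu-server/"
echo "pvecm delnode $NODE1   # run on a node that still has quorum"
echo "pvecm add $NODE2       # run on $NODE1 once it has been cleaned up/reinstalled"
```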
 

bferrell
So, I'm seeing this in the logs, which is confusing, because when I look at the corosync.conf files they all say 6. Does this mean that if I update all 4 of my nodes to version 9 they might re-sync OK?

Code:
Jul 11 22:50:14 svr-01 corosync[16928]:  [TOTEM ] A new membership (192.168.102.11:260) was formed. Members joined: 3 2 4
Jul 11 22:50:14 svr-01 corosync[16928]: error   [CMAP  ] Received config version (8) is different than my config version (7)! Exiting
Jul 11 22:50:14 svr-01 corosync[16928]:  [CMAP  ] Received config version (8) is different than my config version (7)! Exiting
Jul 11 22:50:14 svr-01 corosync[16928]: notice  [SERV  ] Unloading all Corosync service engines.
Jul 11 22:50:14 svr-01 corosync[16928]: info    [QB    ] withdrawing server sockets
Jul 11 22:50:14 svr-01 corosync[16928]: notice  [SERV  ] Service engine unloaded: c
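One way to see which config_version each node is actually running is to pull it straight out of each node's corosync.conf. A minimal sketch (the awk is demonstrated on a self-contained sample file; the commented ssh loop, using the hostnames from this thread, is how you'd compare all 4 nodes for real):

```shell
# Extract config_version from a corosync.conf-style totem block.
# A sample file stands in here so the snippet is self-contained; on a real
# node, point the awk at /etc/corosync/corosync.conf instead.
cat > /tmp/corosync.conf.sample <<'EOF'
totem {
  cluster_name: Congress
  config_version: 6
}
EOF
awk '/config_version/ {print $2}' /tmp/corosync.conf.sample   # prints 6

# Compare across all 4 nodes (hostnames as used in this thread):
#   for h in svr-01 svr-02 svr-03 svr-04; do
#     printf '%s: ' "$h"
#     ssh "$h" cat /etc/corosync/corosync.conf | awk '/config_version/ {print $2}'
#   done
```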
 

bferrell
OK, I think I've got it. That last status was what I needed. I'm not sure why it was showing such a newer version, but I set all 4 nodes to the same config with the next-highest config_version, restarted, and they connected up. I'm going to work my way through restarting them all to be sure, then tomorrow I'll go back to moving the corosync config to the new network. BTW, I think the root of all this was a screw-up I somehow made to the interfaces file on node1 before I rebooted it. There's a lost day for being an idiot. :-( Will report back.
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
3,468
540
113
if you get yourself in this mess again, the following should get you out:

stop pve-cluster and corosync on all nodes

Code:
systemctl stop pve-cluster corosync
take the old corosync.conf that was working, but bump config_version to a value higher than all the ones you tried in the meantime.
distribute that fixed corosync.conf to all nodes and put it in /etc/corosync/corosync.conf

then on all nodes, start pmxcfs in local mode:
Code:
pmxcfs -l
this will force quorum even though there is no clustering going on, so be very careful as you just made all the assumptions we normally have in a cluster invalid!

copy the working corosync.conf to /etc/pve/corosync.conf on all nodes

stop the local-mode pmxcfs again on all nodes
Code:
kill $(pidof pmxcfs)
restart pve-cluster and corosync on all nodes, and wait for them to establish quorum again
Code:
systemctl start pve-cluster corosync
normally (i.e., in a working, quorate cluster), you should only need to replace /etc/pve/corosync.conf with a new version (changed settings + bumped config_version), and it will be distributed to all nodes and reloaded. depending on what you changed, you might need to restart corosync on all nodes for the changes to take effect.
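The steps above can be condensed into one per-node script. This is only a sketch and a dry run by default (it echoes each command and executes nothing unless DO_IT=1); the staging path /root/corosync.conf.fixed is an assumption for illustration, not something from this thread:

```shell
#!/bin/sh
# Dry-run sketch of the recovery sequence; run on EVERY node.
# Assumes the fixed corosync.conf (config_version bumped above every
# version tried so far) has been staged at /root/corosync.conf.fixed.
run() { echo "+ $*"; if [ "${DO_IT:-0}" = "1" ]; then "$@"; fi; }

run systemctl stop pve-cluster corosync
run cp /root/corosync.conf.fixed /etc/corosync/corosync.conf
run pmxcfs -l                                   # pmxcfs in forced-local mode
run cp /root/corosync.conf.fixed /etc/pve/corosync.conf
run kill "$(pidof pmxcfs)"                      # stop the local-mode pmxcfs
run systemctl start pve-cluster corosync
```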
 

bferrell
@fabian Thanks. I just want to change out the ring0_addr entries for the new separate network's hostnames (moving the corosync traffic from 192.168.100.0/24 to 192.168.102.0/24)... which I will try again later today. Each node shows the second NIC up, and I just checked that omping works with the hostnames, so I think I should be good, right? And it doesn't much matter which node I use for 'bindnetaddr'?

Also, looks like I've successfully moved to the new subnet. Is there an easy way to prove the packets are going out the new NIC?

Code:
root@svr-01:~# omping -c 10000 -i 0.001 -F -q 1corosync 2corosync 3corosync 4corosync
2corosync : waiting for response msg
3corosync : waiting for response msg
4corosync : waiting for response msg
2corosync : joined (S,G) = (*, 232.43.211.234), pinging
4corosync : joined (S,G) = (*, 232.43.211.234), pinging
3corosync : joined (S,G) = (*, 232.43.211.234), pinging
2corosync : given amount of query messages was sent
3corosync : given amount of query messages was sent
4corosync : given amount of query messages was sent

2corosync :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.094/0.145/0.327/0.021
2corosync : multicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.102/0.157/0.339/0.023
3corosync :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.093/0.174/0.398/0.048
3corosync : multicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.098/0.183/0.405/0.048
4corosync :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.094/0.142/0.352/0.028
4corosync : multicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.096/0.156/0.361/0.029
 

fabian
you can check the traffic ;)
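For the record, "check the traffic" can be done with tcpdump: corosync totem traffic is UDP on ports 5404/5405. The capture commands in the comments are what you'd run as root on a node (eno1 and vmbr0 are the interface names from the ifconfig output earlier in this thread); the grep at the end is a self-contained illustration, on a made-up sample line, of what to look for:

```shell
# On one node, as root: capture a few totem packets on the new corosync NIC
#   tcpdump -c 5 -ni eno1 udp port 5405
# and confirm the old network's bridge stays quiet:
#   tcpdump -c 5 -ni vmbr0 udp port 5405

# A hypothetical capture line on the new subnet would look like this;
# seeing 192.168.102.x source/destination addresses proves the move:
sample='22:50:14.000000 IP 192.168.102.12.5404 > 192.168.102.11.5405: UDP, length 107'
echo "$sample" | grep -q '192\.168\.102\.' && echo "totem traffic is on the new subnet"
```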
 
