(UDPU) Waiting for Quorum (Stuck)

masterdaweb

Active Member
Apr 17, 2017
I have a cluster with 3 nodes working smoothly; I'm using UDPU instead of multicast.

When I try to add the 4th node, it hangs on "Waiting for Quorum".

Here is my corosync.conf:

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: ns524364
    nodeid: 1
    quorum_votes: 1
    ring0_addr: ns524364
  }

  node {
    name: ns545353
    nodeid: 4
    quorum_votes: 1
    ring0_addr: ns545353
  }

  node {
    name: ns541194
    nodeid: 3
    quorum_votes: 1
    ring0_addr: ns541194
  }

  node {
    name: ns535698
    nodeid: 2
    quorum_votes: 1
    ring0_addr: ns535698
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: mamae
  config_version: 62
  ip_version: ipv4
  secauth: on
  transport: udpu
  version: 2
  interface {
    ringnumber: 0
  }

}
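While it hangs, these standard commands show what the new node sees (a sketch; all are stock Proxmox VE / corosync tools, run as root on the stuck node):

Code:
# cluster membership and quorum state as Proxmox VE reports it
pvecm status

# corosync's own votequorum view (expected votes, total votes, quorate flag)
corosync-quorumtool -s

# recent corosync log lines, to check whether the other nodes'
# UDPU packets are arriving at all
journalctl -u corosync -n 50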
 
Have you looked at:
https://pve.proxmox.com/wiki/Multic....29_instead_of_multicast.2C_if_all_else_fails

Do you have a mapping of hostnames to node IPs in DNS, or in each cluster node's /etc/hosts file?
If not, use the "real" IPs for the nodes' ring0_addr parameters.
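For example, a minimal /etc/hosts on every cluster node would look like this (a sketch; the IP addresses below are hypothetical placeholders, only the hostnames are taken from your nodelist):

Code:
# /etc/hosts (same entries on all 4 nodes; replace the example IPs with your real ones)
192.0.2.1  ns524364
192.0.2.2  ns535698
192.0.2.3  ns541194
192.0.2.4  ns545353

Alternatively, skip name resolution entirely and put the IP itself into each node's ring0_addr, e.g. ring0_addr: 192.0.2.1.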
 
Thank you for supporting me, guys.

I figured it out using the command "pvecm e 1" on the 4th node, and then I could successfully add it to the cluster.

Many thanks.
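For reference: "pvecm e 1" is shorthand for "pvecm expected 1", which tells votequorum to expect only 1 vote, so the joining node becomes quorate on its own and the join can finish. A sketch of the sequence (run as root on the stuck node):

Code:
# temporarily lower the expected vote count so this node becomes quorate
pvecm e 1        # same as: pvecm expected 1

# verify; the output should now report "Quorate: Yes"
pvecm status

Note that lowering expected votes bypasses the normal quorum protection, so it should only be a temporary measure while the node joins.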
 
