Adding a node to the cluster crashes all the nodes in it

alexandrer

New Member
Mar 31, 2021
Hi, tonight is the second time I've run into a huge problem while trying to add a node to my existing cluster.

My cluster contains approximately 15 nodes, I use Ceph as storage, and everything is working pretty well.

All of our nodes' (and future nodes') FQDNs are listed in /etc/hosts, like this:
- 100.118.100.1 pve1 pve1.beecluster.abeille.com
- 100.118.100.2 pve2 pve2.beecluster.abeille.com
...
- 100.118.100.254 pve254 pve254.beecluster.abeille.com

/etc/network/interfaces:
Code:
#Hypervisor interface
auto eno1
iface eno1 inet dhcp

auto eno2
iface eno2 inet manual

#For VMs
auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno2
        bridge_fd 0
        bridge_stp off

#10Gbits CEPH
auto enp1s0f0
iface enp1s0f0 inet static
        address 192.168.100.<Pve number>
        netmask 255.255.255.0
        broadcast 192.168.100.255
        mtu 9000

(over-provisioned yes)

But sometimes (not on every add), and for no apparent reason, when I try to add a node to this cluster with the command pvecm add pve1, it hangs forever on "waiting for quorum..." and, every time, all the nodes already in the cluster become unreachable or restart.
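
For completeness, here is roughly how the join is run (just a sketch; the --link0 address below is a placeholder for the joining node's own address and can be left out entirely):

Code:
# Run on the NEW node only, pointing at an existing cluster member (pve1).
# The --link0 address is a placeholder; without it pvecm picks a default.
pvecm add pve1 --link0 100.118.100.42

# Meanwhile, on an existing member, watch the membership converge
watch -n 2 pvecm status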

The two times this happened, the only way I found to bring all my nodes back up (without the failed node) was:
- Shut down all nodes
- Power them back on one after another and check that each one is reachable by corosync (the checks I run are sketched just below)
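
The per-node check after power-on is nothing exotic, just the standard services and the quorum state:

Code:
# After powering a node back on, verify the cluster stack rejoined
systemctl status corosync pve-cluster
pvecm status          # should eventually report "Quorate: Yes"
corosync-cfgtool -s   # per-link view of the knet links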

Tonight the trouble was a bit different: pings between the nodes worked, but they could not form a single cluster. After restarting all our switches and routers again and again, the "one-way fix" above finally solved it.

I'm worried about this problem because I will have to add a lot more nodes to this cluster and I can't afford this happening every time. Does anyone have an idea of what might be going on?
 
Hi,

are you sure that your cluster network is working with low enough latencies?

Please attach as a file, or copy & paste using the forum's Code function (there is a </> button), the output of the following commands:
Code:
pvecm status
cat /etc/pve/corosync.conf
cat /var/log/syslog
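
To get a first impression of the latency, a plain ping between two nodes over the corosync network is a good start (just a sketch, the address is one from your /etc/hosts example); if your corosync build exposes them, the knet statistics also contain per-link latency values:

Code:
# Latency between two nodes on the corosync network; for a stable cluster
# it should stay in the low single-digit millisecond range
ping -c 100 -i 0.2 100.118.100.2

# knet link statistics, if available in your corosync version
corosync-cmapctl -m stats | grep -i latency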
 
Hi, thanks for your attention. Our cluster uses a dedicated Gigabit Ethernet port for corosync.

Here is the output of the commands:
Code:
root@pve1:~# pvecm status
Cluster information
-------------------
Name:             beeHosting
Config Version:   34
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Apr  1 15:29:25 2021
Quorum provider:  corosync_votequorum
Nodes:            16
Node ID:          0x00000001
Ring ID:          1.d082
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   16
Highest expected: 16
Total votes:      16
Quorum:           9  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 100.118.100.1 (local)
0x00000002          1 100.118.100.2
0x00000003          1 100.118.100.3
0x00000004          1 100.118.100.201
0x00000005          1 100.118.100.15
0x00000006          1 100.118.100.8
0x00000007          1 100.118.100.9
0x00000008          1 100.118.100.10
0x00000009          1 100.118.100.14
0x0000000a          1 100.118.100.11
0x0000000b          1 100.118.100.12
0x0000000c          1 100.118.100.202
0x0000000d          1 100.118.100.203
0x0000000e          1 100.118.100.204
0x0000000f          1 100.118.100.205
0x00000010          1 100.118.100.4

Code:
root@pve1:~# cat /etc/pve/corosync.conf 
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 100.118.100.1
  }
  node {
    name: pve10
    nodeid: 8
    quorum_votes: 1
    ring0_addr: 100.118.100.10
  }
  node {
    name: pve11
    nodeid: 10
    quorum_votes: 1
    ring0_addr: 100.118.100.11
  }
  node {
    name: pve12
    nodeid: 11
    quorum_votes: 1
    ring0_addr: 100.118.100.12
  }
  node {
    name: pve14
    nodeid: 9
    quorum_votes: 1
    ring0_addr: 100.118.100.14
  }
  node {
    name: pve15
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 100.118.100.15
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 100.118.100.2
  }
  node {
    name: pve201
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 100.118.100.201
  }
  node {
    name: pve202
    nodeid: 12
    quorum_votes: 1
    ring0_addr: 100.118.100.202
  }
  node {
    name: pve203
    nodeid: 13
    quorum_votes: 1
    ring0_addr: 100.118.100.203
  }
  node {
    name: pve204
    nodeid: 14
    quorum_votes: 1
    ring0_addr: 100.118.100.204
  }
  node {
    name: pve205
    nodeid: 15
    quorum_votes: 1
    ring0_addr: 100.118.100.205
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 100.118.100.3
  }
  node {
    name: pve4
    nodeid: 16
    quorum_votes: 1
    ring0_addr: 100.118.100.4
  }
  node {
    name: pve8
    nodeid: 6
    quorum_votes: 1
    ring0_addr: 100.118.100.8
  }
  node {
    name: pve9
    nodeid: 7
    quorum_votes: 1
    ring0_addr: 100.118.100.9
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: beeHosting
  config_version: 34
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

I'm attaching the syslog as a compressed file because it's very verbose. I successfully added a node named pve4 (100.118.100.4), and the crash happened when I tried to add pve5 (100.118.100.5), just before 7:20 PM (between 7:05 PM and 7:15 PM).
 

Attachments

  • syslog.tar.gz
    995.7 KB
The crash starts exactly after this line:
Code:
Mar 30 19:11:26 pve1 pvedaemon[1080010]: <root@pam> adding node pve5 to cluster
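
To spare anyone scrolling through the whole archive, the relevant window can be cut out of the syslog roughly like this (a sketch: it assumes at least one log line exists in each boundary minute, and the output file name is arbitrary):

Code:
# Extract the 19:05-19:20 window around the failed join from the full syslog
sed -n '/^Mar 30 19:05/,/^Mar 30 19:20/p' /var/log/syslog > join-window.log
grep -E 'corosync|pmxcfs|pvedaemon' join-window.log | less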
 
I am confused, because there is no 100.118.100.X network in your network configuration?
 
The 100.118.100.X network comes from interface eno1, which is configured via DHCP (/etc/network/interfaces).

Also, I forgot to mention something about this eno1 interface: it's not "fully" dedicated to corosync.

We also run iKVM server monitoring across this interface, configured as a shared LAN rather than a dedicated LAN port. So, from the switches' point of view, one Ethernet port carries two MAC addresses: one on 100.118.100.X (corosync / Proxmox GUI) and one on 100.118.254.X (iKVM).

My associates and I are now wondering whether mixing iKVM and corosync on the same interface could corrupt corosync's traffic when we add a node. Does that sound plausible?

PS: the network is 100.118.x.x/16, so the iKVM and corosync traffic are mixed.
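
To see how much non-corosync traffic actually shares eno1, we could capture on it for a moment (assuming corosync is still on its default UDP port 5405; the filter would need adjusting otherwise):

Code:
# Everything on eno1 that is NOT corosync (default knet port 5405), 200 packets
tcpdump -i eno1 -nn -c 200 not udp port 5405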
 
Why do you use DHCP for your cluster network? This might be one source of problems.

In PVE 6 Corosync uses unicast, so other traffic on the Corosync network doesn't immediately destroy everything. But a dedicated network makes problem solving a lot easier, and many cluster problems are magically solved by having one. See also the network requirements: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_cluster_network
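
As a sketch only (the interface name and the 10.10.10.0/24 subnet are placeholders), a dedicated corosync network is just a static stanza per node, and it can also be added to an existing cluster as a second link in corosync.conf:

Code:
# /etc/network/interfaces on each node - a dedicated, static corosync link
auto eno3
iface eno3 inet static
        address 10.10.10.1
        netmask 255.255.255.0

# /etc/pve/corosync.conf - give every node a second address and add a second
# totem interface; remember to increment config_version when editing
node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 100.118.100.1
    ring1_addr: 10.10.10.1
}

totem {
    # (keep the existing totem settings)
    interface {
        linknumber: 0
    }
    interface {
        linknumber: 1
    }
}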

Could you please also post your ceph config? Are you using the most recent version of PVE?
 
I switched all nodes to static addresses; DHCP was indeed a bad idea from the start.

Our Proxmox version is 6.3-4

My associate and I found something in the directory /etc/pve/nodes/: pve5 was a previous node that we removed a few months ago with the command pvecm delnode pve5, but the pve5 directory still exists, with files like pve-ssl created in 2020.

Code:
root@pve1:~# ls /etc/pve/nodes/
pve1  pve10  pve11  pve12  pve13  pve14  pve15  pve2  pve201  pve202  pve203  pve204  pve205  pve3  pve4  pve5  pve6  pve7  pve8  pve9

Code:
root@pve1:~# ls -la /etc/pve/nodes/pve5/
total 2
drwxr-xr-x 2 root www-data    0 févr. 19  2020 .
drwxr-xr-x 2 root www-data    0 nov.   5  2019 ..
-rw-r----- 1 root www-data   83 mars  30 19:58 lrm_status
drwxr-xr-x 2 root www-data    0 févr. 19  2020 lxc
drwxr-xr-x 2 root www-data    0 févr. 19  2020 openvz
drwx------ 2 root www-data    0 févr. 19  2020 priv
-rw-r----- 1 root www-data 1675 févr. 19  2020 pve-ssl.key
-rw-r----- 1 root www-data 1659 févr. 19  2020 pve-ssl.pem
drwxr-xr-x 2 root www-data    0 févr. 19  2020 qemu-server

Our suspicion is that there is a confusion between the previous and the new pve-ssl files. Doesn't executing pvecm delnode <NODE> delete the node's directory?
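
In case it helps someone else, this is the cleanup we have in mind (assuming the old pve5 is definitively gone and nothing in that directory is still needed):

Code:
# Leftover state of the long-removed pve5 in the clustered config filesystem
ls -la /etc/pve/nodes/pve5

# Remove the stale directory (old guest configs and certificates only)
rm -r /etc/pve/nodes/pve5

# Refresh the node certificate information known to the cluster
pvecm updatecerts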

PS: This pve5 is a different physical server than the previous one (only the hostname is the same), so Proxmox is a completely fresh install.
 
I only just paid attention to the "Re-use hostname or IP" section of the documentation.

I have some nodes that are out of the cluster (removed); should I execute pvecm updatecerts right now to clean up all the leftover node files, before adding new nodes?

I will add 3 nodes to the cluster next week, and I won't re-use any previously used hostname/IP anymore; it's so stressful to see the cluster go down.
 
I reinstalled pve5 as pve16 (a hostname and IP that have never existed in the cluster) and the problem came back. We are getting a bit desperate, because we need to add nodes to the cluster and every attempt to add a node crashes it.
 
