Hello,
I have a cluster with 3 nodes (Debian 11, PVE 7.1.x) and am trying to add a 4th node (via the GUI, as described here). The IP of the new node has been added to the cluster's firewall. I can SSH from every node to the new node and back. But the join does not work.
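(For comparison: as far as I understand it, the CLI equivalent of the GUI join would be roughly the following, run on the new node. This is my sketch, not necessarily the exact command the wizard runs.)

Code:

# 1.1.1.1 is one of the existing cluster members, 4.4.4.4 the new node's link0 address
pvecm add 1.1.1.1 --link0 4.4.4.4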
On the new node:

Code:

root@node-neu ~ # pvecm status
Cluster information
-------------------
Name:             Example
Config Version:   39
Transport:        knet
Secure auth:      on
Quorum information
------------------
Date:             Wed Feb 16 10:44:33 2022
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000004
Ring ID:          4.a6d9
Quorate:          No
Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      1
Quorum:           3 Activity blocked
Flags:
Membership information
----------------------
    Nodeid      Votes Name
0x00000004          1 4.4.4.4 (local)
root@node-neu ~ #
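So the new node only sees itself (Nodes: 1, Total votes: 1). For reference, the knet link state can also be checked directly with corosync-cfgtool; I have not included that output here.

Code:

root@node-neu ~ # corosync-cfgtool -s   # shows the local node ID and the status of each knet link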
On the old node:

Code:

root@node-3 ~ # pvecm status
Cluster information
-------------------
Name:             Example
Config Version:   39
Transport:        knet
Secure auth:      on
Quorum information
------------------
Date:             Wed Feb 16 10:48:35 2022
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000002
Ring ID:          1.9f85
Quorate:          Yes
Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      3
Quorum:           3
Flags:            Quorate
Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 1.1.1.1
0x00000002          1 3.3.3.3 (local)
0x00000003          1 2.2.2.2
root@node-3 ~ #

On the new node:

Code:

root@node-neu ~ # ls -al /etc/pve/
total 5
drwxr-xr-x  2 root www-data    0  1. Jan 1970  .
drwxr-xr-x 88 root root     4096 16. Feb 09:54 ..
-r--r-----  1 root www-data  358  1. Jan 1970  .clusterlog
-r--r-----  1 root www-data  621 16. Feb 10:13 corosync.conf
-rw-r-----  1 root www-data    2  1. Jan 1970  .debug
lr-xr-xr-x  1 root www-data    0  1. Jan 1970  local -> nodes/node-neu
lr-xr-xr-x  1 root www-data    0  1. Jan 1970  lxc -> nodes/node-neu/lxc
-r--r-----  1 root www-data  316  1. Jan 1970  .members
lr-xr-xr-x  1 root www-data    0  1. Jan 1970  openvz -> nodes/node-neu/openvz
lr-xr-xr-x  1 root www-data    0  1. Jan 1970  qemu-server -> nodes/node-neu/qemu-server
-r--r-----  1 root www-data  213  1. Jan 1970  .rrd
-r--r-----  1 root www-data  777  1. Jan 1970  .version
-r--r-----  1 root www-data   18  1. Jan 1970  .vmlist
root@node-neu ~ #
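As far as I know, pmxcfs keeps /etc/pve read-only while the node is not quorate, which matches the r--r----- permissions above; a write attempt is expected to fail:

Code:

root@node-neu ~ # touch /etc/pve/test   # expected to fail while the node has no quorum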
The /etc/pve/corosync.conf on the new node:

Code:

logging {
  debug: off
  to_syslog: yes
}
nodelist {
  node {
    name: node-3
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 3.3.3.3
  }
  node {
    name: node-1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 1.1.1.1
  }
  node {
    name: node-3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 2.2.2.2
  }
  node {
    name: node-neu
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 4.4.4.4
  }
}
quorum {
  provider: corosync_votequorum
}
totem {
  cluster_name: Example
  config_version: 39
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}
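As I understand it, corosync itself reads /etc/corosync/corosync.conf, which pve-cluster keeps in sync with the pmxcfs copy shown above. To rule out a stale local copy, the two can be compared:

Code:

diff /etc/pve/corosync.conf /etc/corosync/corosync.conf   # no output means the copies match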
And on the new node:

Code:

root@node-neu ~ # systemctl status -l pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-02-16 10:28:08 CET; 35min ago
    Process: 963 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
   Main PID: 965 (pmxcfs)
      Tasks: 6 (limit: 154403)
     Memory: 31.4M
        CPU: 547ms
     CGroup: /system.slice/pve-cluster.service
             └─965 /usr/bin/pmxcfs
Feb 16 10:28:07 node-neu pmxcfs[965]: [dcdb] crit: cpg_initialize failed: 2
Feb 16 10:28:07 node-neu pmxcfs[965]: [dcdb] crit: can't initialize service
Feb 16 10:28:07 node-neu pmxcfs[965]: [status] crit: cpg_initialize failed: 2
Feb 16 10:28:07 node-neu pmxcfs[965]: [status] crit: can't initialize service
Feb 16 10:28:08 node-neu systemd[1]: Started The Proxmox VE cluster filesystem.
Feb 16 10:28:13 node-neu pmxcfs[965]: [status] notice: update cluster info (cluster name  Example, version = 39)
Feb 16 10:28:22 node-neu pmxcfs[965]: [dcdb] notice: members: 4/965
Feb 16 10:28:22 node-neu pmxcfs[965]: [dcdb] notice: all data is up to date
Feb 16 10:28:22 node-neu pmxcfs[965]: [status] notice: members: 4/965
Feb 16 10:28:22 node-neu pmxcfs[965]: [status] notice: all data is up to date
root@node-neu ~ #
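The pmxcfs log above shows "cpg_initialize failed: 2" at startup, i.e. pmxcfs could not reach corosync at that moment. The corosync log itself can be pulled up with the following; I have not included that output here.

Code:

journalctl -b -u corosync   # corosync's own log for the current boot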
Any ideas what's wrong?

Edit: I also added all IPs, hostnames and short names (node-3, node-1, ...) to /etc/hosts on the new node.
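For reference, the entries look roughly like this (the name for nodeid 3 is my assumption, since the corosync.conf above lists "node-3" twice; addresses anonymized as above):

Code:

# /etc/hosts on the new node
1.1.1.1   node-1
2.2.2.2   node-2     # assumed name; corosync.conf above lists "node-3" for nodeid 3
3.3.3.3   node-3
4.4.4.4   node-neu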