Issues with cluster config

Feb 18, 2019
I've set up a new PVE instance, created the cluster configuration using the web UI, and installed a second PVE instance which was supposed to join the newly created cluster.
Unfortunately, the host entry of the second instance pointed to a wrong IP address, so after the join process the IP of the second server was wrong. Changing the IP on the server and in the corosync config didn't help, so I wanted to remove the cluster node again.

I've tried to remove the second cluster node using

```
pvecm expected 1
pvecm delnode 2
```

and now `pvecm nodes` shows only one node again.

But the web UI still lists the second node with an error symbol on it.
I've reinstalled node 2 and wanted to join it again, but the web UI does not provide the cluster join information any more. I could only create a new cluster using a new name, because PVE tells me the old name is still in use.

What's the correct approach to fix this issue?

Also, there is a VM running on node 1 which should not be taken offline unexpectedly :)

Thanks for your help!
 
Is there anything left in /etc/pve/nodes/<node2>? This could explain why it still shows in the GUI.
Please post the output of 'pvecm status' and the contents of /etc/pve/corosync.conf.
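
For reference, checking and clearing such leftovers could look roughly like this (the directory name is a placeholder; only remove it once you're sure the old node won't return with its old identity):

```
# each cluster member has a directory here; a leftover entry for a
# removed node keeps it visible in the GUI
ls /etc/pve/nodes/

# if the deleted node still has a directory, remove it
rm -r /etc/pve/nodes/<node2>
```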
 
> Is there anything left in /etc/pve/nodes/<node2>?

No.


> Please post the output of 'pvecm status' and the contents of /etc/pve/corosync.conf.

You're right. The corosync.conf still has the node2 entry.

```
[12:32]root@fullipgvh-n2:~# pvecm status
Quorum information
------------------
Date:             Mon Feb 18 12:33:25 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/32
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.10.6.3 (local)
[12:33]root@fullipgvh-n2:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: fullipgvh-n1
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.6.2
  }
  node {
    name: fullipgvh-n2
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.6.3
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: fullipgvh
  config_version: 2
  interface {
    bindnetaddr: 10.10.6.3
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
```

PS: Don't be confused: the node labeled n2 is node 1 from PVE's point of view and vice versa :)
 
Looks like you'll have to manually remove it from the corosync config.
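
The manual edit can be done along these lines (the .new/.bak file names and the copy steps are just a cautious way of doing it, not something prescribed here):

```
# work on a copy so the cluster filesystem never sees a half-edited file
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new

# in the copy: delete the stale node { ... } block of the removed node
# and increment config_version in the totem section (here: 2 -> 3)
nano /etc/pve/corosync.conf.new

# keep a backup of the old config, then activate the edited one;
# corosync picks up the new version automatically
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
```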
 
That should be it.
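
Once the stale entry is gone, joining the reinstalled node should work again, e.g. (run on the fresh node; the IP is that of the remaining node, taken from the status output above):

```
# join the existing cluster by pointing at the surviving node
pvecm add 10.10.6.3
```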
 
