corosync/totem question

M-SK

Member
Oct 11, 2016
Hello,

We have a node whose Proxmox 4.4->5.0 upgrade failed, and whenever it comes online it causes the remaining 4 cluster nodes to fence themselves. It seems I will have to reinstall that node in any case.

I've just noticed something strange in corosync.conf while backing up the configuration:

cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: nthl12
    nodeid: 1
    quorum_votes: 1
    ring0_addr: nthl12
  }

  node {
    name: ntvl18
    nodeid: 2
    quorum_votes: 1
    ring0_addr: ntvl18
  }

  node {
    name: nthl11
    nodeid: 4
    quorum_votes: 1
    ring0_addr: nthl11
  }

  node {
    name: nthl03
    nodeid: 3
    quorum_votes: 1
    ring0_addr: nthl03
  }

  node {
    name: nthl16
    nodeid: 5
    quorum_votes: 1
    ring0_addr: nthl16
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: px
  config_version: 7
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 172.20.10.7
    ringnumber: 0
  }
}

The totem bindnetaddr points to the IP of a node that was removed from the cluster some time ago. Should I remedy this, and if so, how?
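
In case editing the file is the right fix, this is my guess at the procedure based on the wiki; the network address 172.20.10.0 is just my assumption for what bindnetaddr should become, and I assume config_version has to go from 7 to 8. Please correct me if I'm wrong:

# Work on a copy so pmxcfs only picks up the finished file (my understanding):
cp /etc/pve/corosync.conf /root/corosync.conf.new
# In the copy, change the totem interface section, e.g.:
#   bindnetaddr: 172.20.10.0   <- network address instead of the removed node's IP (assumed)
# and bump config_version: 7 -> 8 so the change propagates.
nano /root/corosync.conf.new
# Move it back into /etc/pve to activate it cluster-wide:
mv /root/corosync.conf.new /etc/pve/corosync.conf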

Second question: can I "pvecm delnode" the failing node (the one that destroys quorum and breaks the SAN multipaths whenever it's up after the upgrade), remove its auth key, and then reinstall it and rejoin it cleanly under the same name and IP?
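
To be explicit, this is the sequence I have in mind; <failednode> is a placeholder since I'd rather not name it here, and the cleanup steps are assumptions on my part:

# On one of the healthy nodes, with the failing node powered off:
pvecm delnode <failednode>
# Clean up leftovers so the name can be reused (assumed to be needed):
rm -rf /etc/pve/nodes/<failednode>
# Drop its stale SSH host key:
ssh-keygen -R <failednode>
# Reinstall with the same name/IP, then on the freshly installed node:
pvecm add <ip-of-a-healthy-cluster-node>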

Thanks!
 
