Cluster setup: corosync.conf had mixed ipv4/6 node entries after reboot

rdtsupport

New Member
Nov 15, 2024
While joining the cluster with node "standby", I used IPv4 addresses only, but we ended up with this:


# cat /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: m9
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 83.a.b.c
  }
  node {
    name: standby
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 2a01:4f9:5a:xxxx::x
  }
}

quorum {
  provider: corosync_votequorum
}
...

I manually fixed corosync.conf, restarted corosync, and it worked.

After a reboot, corosync.conf was back to the mixed IPv4/IPv6 state shown above, and corosync failed to start because of it.

One manual fix later it started again, but I don't want to keep checking whether it happens again.

I think this is a bug, which I would really like to get rid of. Any ideas?
 
Hi rdtsupport,

you need to fix `/etc/pve/corosync.conf` (the corosync configuration file on the pmxcfs cluster filesystem, which is available on all of your nodes). The local per-node file `/etc/corosync/corosync.conf` gets synced from this file.

Please check the docs [0] for details and make sure to increment the `config_version` number when updating the file.
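As a rough sketch of that workflow (assuming the default Proxmox paths; the `CONF` variable and the `perl` one-liner for bumping `config_version` are my own illustration, not a Proxmox tool), the edit could look like this:

```shell
# Hedged sketch: edit a copy of the cluster-wide file, bump
# config_version, then move the copy into place so pmxcfs propagates
# it to /etc/corosync/corosync.conf on every node.
# CONF defaults to the Proxmox path; point it elsewhere to try it out.
CONF="${CONF:-/etc/pve/corosync.conf}"

cp "$CONF" "$CONF.new"
# ... correct the ring0_addr entries in "$CONF.new" with your editor ...

# increment config_version so the nodes accept the new revision
perl -pi -e 's/(config_version:\s*)(\d+)/$1 . ($2 + 1)/e' "$CONF.new"

mv "$CONF.new" "$CONF"   # activates the new config cluster-wide
```

Working on a copy and moving it into place at the end keeps the cluster from picking up a half-edited file.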

I hope this helps!

[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_configuration
 
THX. That file was indeed faulty.

As this file was auto-created, something is buggy if the cluster master ends up IPv4-only while the other node gets mixed IPv4/IPv6 entries. Three pairs of eyes confirmed that we used IPv4 addresses only when setting up the cluster.