I joined 2 new (I think identical) servers to an existing cluster.
I joined the first one using 'pvecm add 192.168.1.66'.
This one joined fine, although I still have a strange problem with migrations: live migration tries to use the new node's primary IP (10.1.51.41) for the transfer, but that IP isn't routable from the older cluster nodes. Strangely enough, they don't use the corosync IP that I set up/corrected to 192.168.1.66. My guess at what controls this is below.
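If it matters: as far as I understand, PVE picks the migration network from /etc/pve/datacenter.cfg rather than from the corosync link, so I suspect I still need something like the following (the CIDR is my assumption for our setup, not something I have applied yet):
Code:
# /etc/pve/datacenter.cfg -- assumed fix, not applied yet:
# route (secure) migration traffic over the 192.168.1.0/24 network
migration: secure,network=192.168.1.0/24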
Corosync itself seemed to work.
I joined the second one using 'pvecm add 192.168.1.66 -link0 192.168.1.202' (the secondary IP of that node), hoping to mitigate the migration error I experienced with the first new node.
But now, strangely enough, I can't get corosync up on either new node. The config seems to have synchronized to all nodes, but corosync won't start on the 2 new ones. This is what journalctl -u corosync shows on hv01:
Code:
Apr 02 20:06:08 hv01 systemd[1]: Starting Corosync Cluster Engine...
Apr 02 20:06:08 hv01 corosync[1876883]: [MAIN ] Corosync Cluster Engine 3.1.7 starting up
Apr 02 20:06:08 hv01 corosync[1876883]: [MAIN ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf vqsim nozzle snmp pie relro bindnow
Apr 02 20:06:08 hv01 corosync[1876883]: [MAIN ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Apr 02 20:06:08 hv01 corosync[1876883]: [MAIN ] Please migrate config file to nodelist.
Apr 02 20:06:08 hv01 corosync[1876883]: [MAIN ] parse error in config: This totem parser can only parse version 2 configurations.
Apr 02 20:06:08 hv01 corosync[1876883]: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1445.
Apr 02 20:06:08 hv01 systemd[1]: corosync.service: Main process exited, code=exited, status=8/n/a
Apr 02 20:06:08 hv01 systemd[1]: corosync.service: Failed with result 'exit-code'.
Apr 02 20:06:08 hv01 systemd[1]: Failed to start Corosync Cluster Engine.
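The line that stands out to me is the parse error about version 2 configurations, so the first thing I checked was the version keys in the synced config (just my own debugging step, nothing official):
Code:
# lists config_version, ip_version and the totem 'version' key
grep -n 'version' /etc/pve/corosync.conf
# as far as I know, corosync 3.x still expects 'version: 2' in totem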
Version of the 2 new joining nodes:
pveversion
pve-manager/7.4-3/9002ab8a (running kernel: 5.15.102-1-pve)
4 older nodes (in production, but they will be removed once everything is migrated...):
pve-manager/6.0-15/52b91481 (running kernel: 5.0.21-5-pve)
1 older node:
pve-manager/6.2-4/9824574a (running kernel: 5.4.34-1-pve)
/etc/pve/corosync.conf (seems to be in sync on all nodes)
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: VRT16
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 192.168.1.69
  }
  node {
    name: VRT18
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.1.71
  }
  node {
    name: hv01
    nodeid: 6
    quorum_votes: 1
    ring0_addr: 192.168.1.201
  }
  node {
    name: hv02
    nodeid: 7
    quorum_votes: 1
    ring0_addr: 192.168.1.202
  }
  node {
    name: vrt12
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.66
  }
  node {
    name: vrt13
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.1.67
  }
  node {
    name: vrt14
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.68
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: vrt
  config_version: 15
  interface {
    bindnetaddr: 192.168.1.62
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 3
}
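For what it's worth, comparing this config against the log, two things in the totem section look suspicious to me: 'version: 3' (the parser says it can only parse version 2 configurations) and the leftover interface/bindnetaddr block that the startup warning complains about. If that really is all it is, my plan would be to edit the file the way the PVE docs describe; the 'version: 2' value and the bumped config_version are my assumptions, nothing I have applied yet:
Code:
# work on a copy so pmxcfs only ever sees a complete file
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
# in the copy: set 'version: 2' in totem, bump 'config_version' to 16,
# and optionally drop the deprecated interface/bindnetaddr block
nano /etc/pve/corosync.conf.new
# the atomic move makes pmxcfs sync the new config to all nodes
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf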