Remove corosync ring0 (link0) network without rebooting nodes

semira uthsala
Nov 19, 2019
Hi all,

We have a 12-node cluster with 6x Ceph nodes and 6x compute nodes (the Ceph nodes are not running any VMs). Corosync is connected via two links: one dedicated link and one shared (mgmt) link.

Ceph is configured with a dedicated backend sync network as well.

We are changing some of the switch config and cleaning up the network side, and we are also adding two new compute nodes. For this I need to remove the dedicated corosync link (link0) and move all corosync traffic to the mgmt link (link1).

We are not using any HA services; the pve-ha-lrm/pve-ha-crm services are disabled on all nodes, and no watchdog timers are running.
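
For completeness, this is how we checked that on each node (just a quick sanity check, nothing official):

Code:
# confirm the HA services are not running on this node
systemctl is-active pve-ha-lrm pve-ha-crm

# confirm no HA resources are configured in the cluster
ha-manager status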

Can I directly edit /etc/pve/corosync.conf, remove the ring0 (link0) entries, and restart corosync on all nodes? (A rough sketch of what I mean follows after the config below.)

And we really need to skip rebooting the nodes, especially the Ceph nodes.

What is the best way to do this?

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve-compute-01
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.17.180
    ring1_addr: 192.168.27.180
  }
  node {
    name: pve-compute-02
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 192.168.17.181
    ring1_addr: 192.168.27.181
  }
  node {
    name: pve-compute-03
    nodeid: 6
    quorum_votes: 1
    ring0_addr: 192.168.17.182
    ring1_addr: 192.168.27.182
  }
  node {
    name: pve-compute-04
    nodeid: 7
    quorum_votes: 1
    ring0_addr: 192.168.17.183
    ring1_addr: 192.168.27.183
  }
  node {
    name: pve-compute-05
    nodeid: 8
    quorum_votes: 1
    ring0_addr: 192.168.17.184
    ring1_addr: 192.168.27.184
  }
  node {
    name: pve-compute-06
    nodeid: 9
    quorum_votes: 1
    ring0_addr: 192.168.17.185
    ring1_addr: 192.168.27.185
  }
  node {
    name: pve-storage-01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.17.170
    ring1_addr: 192.168.27.170
  }
  node {
    name: pve-storage-02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.17.171
    ring1_addr: 192.168.27.171
  }
  node {
    name: pve-storage-03
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.17.172
    ring1_addr: 192.168.27.172
  }
  node {
    name: pve-storage-04
    nodeid: 10
    quorum_votes: 1
    ring0_addr: 192.168.17.173
    ring1_addr: 192.168.27.173
  }
  node {
    name: pve-storage-05
    nodeid: 11
    quorum_votes: 1
    ring0_addr: 192.168.17.174
    ring1_addr: 192.168.27.174
  }
  node {
    name: pve-storage-06
    nodeid: 12
    quorum_votes: 1
    ring0_addr: 192.168.17.175
    ring1_addr: 192.168.27.175
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: pve-cluster
  config_version: 12
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
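
To make it concrete, this is roughly the procedure I had in mind, based on my reading of the docs (a sketch, so please correct me if any step is wrong; it assumes editing a copy first and letting pmxcfs distribute the result):

Code:
# work on a copy first -- /etc/pve/ is cluster-wide (pmxcfs),
# so this only needs to be done on one node
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new

# in the copy: delete all "ring0_addr: ..." lines, delete the
# "interface { linknumber: 0 }" block from totem, and bump
# config_version from 12 to 13
nano /etc/pve/corosync.conf.new

# move the new config into place; pmxcfs syncs it to all nodes
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf

# then, one node at a time, restart corosync and check health
systemctl restart corosync
corosync-cfgtool -s   # remaining link should show as connected
pvecm status          # quorum should be intact before moving on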
 
Thanks for the reply.

Is there any risk in doing this while the nodes are running, especially Ceph?

Does any Ceph-related communication happen through this link as well?
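
In case it helps whoever answers: as far as I understand, the networks Ceph actually uses can be checked like this (assuming the standard config location on PVE), to compare against the corosync link subnets:

Code:
# which networks Ceph uses for client and replication traffic
grep -E 'public_network|cluster_network' /etc/ceph/ceph.conf

# current status of both corosync links, to compare subnets
corosync-cfgtool -s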