[SOLVED] Join fresh install of PVE 6.1 to 6.0 cluster

Thomas Plant

Hello,

we'd like to add two servers to our existing Proxmox 6.0 cluster. Can we use the 6.1 ISOs, or is it better to first install 6.0, join the cluster, and upgrade to 6.1 afterwards?

Thanks,
Thomas
 
Having two more servers lets us distribute the existing VMs better; afterwards we would upgrade the cluster from 6.0 to 6.1.

Thanks for the answer.
 
Joined the servers successfully.

Stupid me, on one node I selected the wrong network as the cluster network; it now uses our storage network. Can I adjust this?
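
For reference, when joining via the CLI the cluster network is chosen explicitly with the link0 option of pvecm add; a sketch of what I should have done (10.10.2.90 being an existing cluster node, 10.10.2.92 this node's address on the internal net):

Code:
# Join the new node to the cluster at 10.10.2.90, binding corosync
# link0 to the intended cluster network instead of the storage one.
pvecm add 10.10.2.90 --link0 10.10.2.92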
 
on one node I selected the wrong network as the cluster network; it now uses our storage network. Can I adjust this?

And it still worked? Do you have multiple links for corosync/knet cluster traffic?

Normally it can be changed, but it would be good to have more info, so that I don't give you wrong directions:

Can you post
Code:
pvecm status
cat /etc/pve/corosync.conf
 
Yes, the cluster works.
We have an internal net we normally use for the web console/cluster and a storage network, which I wrongly selected as link0.

Here is the information you requested:

Code:
root@pve6:~# pvecm status
Cluster information
-------------------
Name:             PVECLUSTER01
Config Version:   6
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Dec 10 16:20:31 2019
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000003
Ring ID:          1.f8
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.10.2.90
0x00000002          1 10.10.2.91
0x00000003          1 192.168.21.32 (local)
0x00000004          1 10.10.2.93

Code:
root@pve6:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve4
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.2.90
  }
  node {
    name: pve5
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.2.91
  }
  node {
    name: pve6
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.21.32
  }
  node {
    name: pve7
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.10.2.93
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: PVECLUSTER01
  config_version: 6
  interface {
    bindnetaddr: 10.10.2.90
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}

I was reading https://pve.proxmox.com/wiki/Cluster_Manager, and under 'Separate After Cluster Creation' it says I can simply edit corosync.conf. Would changing the wrong IP and bumping 'config_version' to 7 do the job?
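
The intended edit, sketched against the config above (10.10.2.92 is this node's address on the internal net), plus bumping config_version in the totem section from 6 to 7:

Code:
  node {
    name: pve6
    nodeid: 3
    quorum_votes: 1
    # was 192.168.21.32
    ring0_addr: 10.10.2.92
  }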

Thanks for your help
 
I was reading https://pve.proxmox.com/wiki/Cluster_Manager, and under 'Separate After Cluster Creation' it says I can simply edit corosync.conf. Would changing the wrong IP and bumping 'config_version' to 7 do the job?
Yes. Edit /etc/pve/corosync.conf on a node with a correct IP (i.e., one which is quorate), then copy it over to the node with the incorrect IP (which doesn't automatically get the change, because it's not quorate), placing it there at /etc/corosync/corosync.conf (note the different path), and restart corosync: systemctl restart corosync
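
Sketched as concrete commands (the address is taken from the output above; this assumes root SSH access between the nodes):

Code:
# On a quorate node (e.g. pve4): fix ring0_addr of pve6 and bump
# config_version in the cluster-wide config.
nano /etc/pve/corosync.conf

# Copy the corrected file to the misconfigured node, into the *local*
# corosync path (note: /etc/corosync, not /etc/pve).
scp /etc/pve/corosync.conf root@192.168.21.32:/etc/corosync/corosync.conf

# On the misconfigured node: restart corosync, then verify.
systemctl restart corosync
pvecm status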
 
Sorry, this did not work. Still got Quorum: 3

Code:
root@pve6:~# pvecm status
Cluster information
-------------------
Name:             PVECLUSTER01
Config Version:   7
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Dec 11 08:47:30 2019
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000003
Ring ID:          1.10c
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.10.2.90
0x00000002          1 10.10.2.91
0x00000003          1 10.10.2.92 (local)
0x00000004          1 10.10.2.93

Shouldn't I maybe edit the 'Config_Version'?
 
Sorry, this did not work. Still got Quorum: 3
It did work; the "Quorum: 3" just denotes how many votes your cluster needs to be quorate, which is 3 for a 4-node cluster (odd node counts are more ideal).

Expected votes: 4
Highest expected: 4
Total votes: 4

Expected == Total votes, thus all is well. It seems the "wrong" address was not a real issue, as all nodes could also communicate with the "wrong address" node over that network.
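
A quick illustration of that majority rule (standard votequorum arithmetic, integer division):

Code:
# quorum = floor(total_votes / 2) + 1
echo $(( 4 / 2 + 1 ))   # prints 3, matching "Quorum: 3" above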

But you have now successfully changed it to the 10.10.2.0/20 net, so all is well, I'd say.
 
