Node version for clustering

Deleted member 93625 (Guest)
Hi team,

The cluster manager page shows some requirements for joining nodes into a cluster. It says all nodes should have the same version. I'd like to ask some questions regarding this.


  1. Does this mean all the numbers should be the same, including the minor version numbers? For example, if node A has version 6.2.1, do the other nodes need version 6.2.1 before joining? Or is it a bit looser, say version 5 with version 5 and version 6 with version 6?
  2. If a node joining the cluster has a higher version number, will that be a problem? For example, nodes A and B have 6.2.1 and are already clustered; if node C comes in with 6.2.2 or higher (6.3.1 for example), will the cluster break?
  3. What about the opposite situation? Say node C has 6.2.0 or 6.1.5, for example, will that be okay?
Thanks a lot.

Eoin
 
Hi,

Does this mean all the numbers should be the same, including the minor version numbers? For example, if node A has version 6.2.1, do the other nodes need version 6.2.1 before joining? Or is it a bit looser, say version 5 with version 5 and version 6 with version 6?
Ideally they should all be the same, but usually it's enough to have them on the same major version. From version 5 to 6 there were changes to the protocol used by Corosync, causing communication errors between the two versions. That's why you shouldn't mix version 5 and version 6 nodes in the same cluster.
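
For illustration only (not from the thread or the Proxmox docs): a minimal Python sketch of that "same major version" check. The node names and version strings are made-up placeholders; in practice you would collect them yourself, for example from what pveversion reports on each node.

```python
# Minimal sketch: check that nodes share the same major (ideally exact) version.
# The node names and version strings below are hypothetical placeholders.

versions = {
    "nodeA": "6.2-1",
    "nodeB": "6.2-1",
    "nodeC": "6.3-1",
}

def major_minor(version):
    """Split a version string like '6.2-1' into integer (major, minor) parts."""
    major, rest = version.split(".", 1)
    minor = rest.split("-", 1)[0]
    return int(major), int(minor)

majors = {major_minor(v)[0] for v in versions.values()}
if len(majors) > 1:
    print("Major versions differ -- do not cluster these nodes:", versions)
elif len(set(versions.values())) > 1:
    print("Same major version but different point releases -- joining should")
    print("work, but bring them to the same version as soon as possible.")
else:
    print("All nodes report the same version.")
```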

If a node joining the cluster has a higher version number, will that be a problem? For example, nodes A and B have 6.2.1 and are already clustered; if node C comes in with 6.2.2 or higher (6.3.1 for example), will the cluster break?
Probably not, but you should still update them to the same version ASAP ;)
 
Hi @oguz,

Thanks for that. Regarding the upgrade, I guess all VMs should be powered off on the node or migrated to other nodes before the upgrade? Is this for VM availability, or could running VMs on the node actually break something during the upgrade?

Eoin
 
I guess all VMs should be powered off on the node or migrated to other nodes before the upgrade?
Yes.

Is this for VM availability, or could running VMs on the node actually break something during the upgrade?
It's mainly for VM availability. Live migration ensures downtime stays minimal while upgrading the host. After the upgrade is complete and the node is rebooted, you can migrate the VMs back.
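
For illustration, here is a minimal sketch (not an official tool) of draining a node before the upgrade by live-migrating its running VMs with the qm CLI. The target node name is a placeholder, and the way the qm list output is parsed is an assumption to double-check on your own setup.

```python
# Minimal sketch (not an official tool): live-migrate every running VM off this
# node so it can be upgraded and rebooted. Assumes the standard `qm` CLI is
# available; TARGET_NODE and the `qm list` column layout are assumptions.

import subprocess

TARGET_NODE = "nodeB"  # hypothetical name of the node that should take the VMs

def running_vmids():
    """Return the VMIDs that `qm list` reports as 'running'."""
    out = subprocess.run(["qm", "list"], capture_output=True, text=True, check=True)
    vmids = []
    for line in out.stdout.splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) >= 3 and parts[2] == "running":
            vmids.append(parts[0])
    return vmids

for vmid in running_vmids():
    print(f"Live-migrating VM {vmid} to {TARGET_NODE} ...")
    subprocess.run(["qm", "migrate", vmid, TARGET_NODE, "--online"], check=True)

print("Node drained; upgrade and reboot it, then migrate the VMs back.")
```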
 
