Upgrade cluster node to Proxmox 6 (re-install) when already part of Proxmox 5.4 cluster?

n1nj4888

Active Member
Jan 13, 2019
Hi,

I currently have a Proxmox 5.4 homelab cluster (2 nodes + external qdevice) and plan to rebuild it as Proxmox 6 via a complete re-install of each node, since I want to move the underlying local storage to ZFS on UEFI.

I’ve read the PVE6 upgrade guide (https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0), but the following points weren’t clear to me:

(1) Can I leave Node 2 up (on PVE5.4) whilst rebuilding Node 1 as PVE6 with the same hostname, cluster name, IP, etc. that it had under PVE5.4? I’m interested to know whether this will cause issues with Node 2 (or the external qdevice) trying to reconnect to the new PVE6 cluster on Node 1, thinking it is still PVE5.4. The reason I’d like to keep Node 2 on PVE5.4 is to move all the VMs to Node 2 first (a rough sketch of how I’d drain Node 1 is below), which minimises VM downtime whilst rebuilding Node 1 and lets me check whether any issues seen with PVE6 on Node 1 also exist with PVE5.4 on Node 2. After a stabilisation period, I’d then rebuild Node 2 as PVE6 and add it to the new PVE6 cluster. I’d also then potentially have to rebuild / re-add the existing PVE5.4 qdevice to the new PVE6 cluster.
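For reference, here’s roughly how I’d plan to drain Node 1 before the re-install (the VM IDs and the node name "node2" are just placeholders, and I’m assuming the VMs sit on storage that allows live migration):

    # On Node 1: live-migrate each VM over to Node 2.
    qm migrate 100 node2 --online
    qm migrate 101 node2 --online

    # Confirm nothing is left running on this node.
    qm list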

(2) I assume PVE6 doesn’t require any changes to the existing PVE5.4 qdevice? For example, does it need a later version of qdevice installed?


Looking forward to installing PVE6!


Thanks!
 
Regarding (1): What you write is possible if you know your way around the cluster stack. Note that you'd need to upgrade Corosync on Node 2 to Corosync 3.x as described in the upgrade guide, otherwise Corosync on Nodes 1 and 2 cannot talk to each other. I highly recommend testing the whole scenario in a virtual cluster first! A rough sketch of the steps is below.
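Roughly, assuming node names node1/node2 (the repository line is taken from the upgrade guide; the delnode/add flow around the re-install is one common way to re-join a re-installed node, so treat it as a sketch and double-check against the wiki before running anything):

    # On Node 2 (still PVE5.4): switch Corosync to the 3.x repository, per the upgrade guide.
    echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" > /etc/apt/sources.list.d/corosync3.list
    apt update
    apt dist-upgrade --download-only   # fetch the packages first
    apt dist-upgrade                   # actually upgrade to Corosync 3.x

    # Still on Node 2: drop the old Node 1 entry before the rebuilt node rejoins
    # ("node1" is a placeholder for your actual node name).
    pvecm delnode node1

    # On the freshly installed PVE6 Node 1: join the existing cluster
    # (192.0.2.2 is a placeholder for Node 2's IP).
    pvecm add 192.0.2.2

    # On either node: verify membership and quorum.
    pvecm status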

Regarding (2): Corosync 3.x qdevice (the part running on the "full" cluster nodes) is compatible with Corosync 2.x qnetd (the part running on the external, vote-only node), and vice versa. I do recommend upgrading the qnetd host to Corosync 3 as well after your nodes have been upgraded (e.g., by upgrading it to Buster if it is a Debian host).
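If you do end up re-adding the qdevice after the rebuild, a minimal sketch (the qnetd host's IP is a placeholder):

    # On one cluster node: remove the old qdevice registration, then register it again.
    pvecm qdevice remove
    pvecm qdevice setup 192.0.2.10   # placeholder IP of the external qnetd host

    # Verify the qdevice is contributing a vote.
    pvecm status

    # On the qnetd host itself: show status and connected clusters.
    corosync-qnetd-tool -s -v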