Proxmox 6 Cluster with Proxmox 5.4?

Is it possible to add Proxmox 6 nodes to an existing Proxmox 5.x cluster?

No. Corosync 3, which will be shipped with Proxmox VE 6, changed its on-the-wire format in an incompatible way.
The underlying transport mechanism also changed from corosync's own multicast UDP stack to kronosnet, which for now is unicast only (multicast support is planned, but not on the horizon).

But we tried to avoid the need to fully rebuild every 5.x based cluster. We provide a separate package repository with Corosync 3 builds for Proxmox VE 5, which can be used to upgrade a cluster to the new corosync and its new on-the-wire format before doing the dist-upgrade to PVE 6. The configuration can stay more or less the same; older installations may need to adapt it a bit, and we created a checklist helper tool (simply called pve5to6) which can help you find out whether you need to do anything.
For more information see https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0. You can already use it to test the upgrade on a test system, which can also be a virtual (nested) Proxmox VE.
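As a rough sketch, the corosync switch on a 5.4 node looks something like the following. The repository line is the one documented in the upgrade wiki, but double-check it there before use; treat this as an outline, not a copy-paste recipe:

```shell
# Sketch of the corosync 3 upgrade on a PVE 5.4 node.
# The repository line below should be verified against the upgrade wiki.
echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" \
    > /etc/apt/sources.list.d/corosync3.list
apt update
# Pre-fetch the packages on all nodes first, then upgrade node by node
apt dist-upgrade --download-only
apt dist-upgrade
# Run the checklist helper to spot remaining issues before the PVE 6 upgrade
pve5to6
```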
 
Hi.

Yes, it is possible. See: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Actions_step-by-step

I'd really recommend testing the upgrade in a test setup first. If you already have a test cluster, great; otherwise you could also create a "virtual" one. Meaning: create a few VMs (3 is a really good number here, not too big but big enough to simulate most real situations) with Proxmox VE 5.4 inside, cluster them, maybe create some small VMs for testing migration, and then try out the upgrade how-to on that cluster. It lets you get a feeling for the upgrade process and increases the chance of a successful upgrade. The most important thing is to ensure that no upgrade step gets left out.
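Clustering the three nested 5.4 VMs could look like this sketch (the cluster name and IP address are placeholders for illustration):

```shell
# On the first nested PVE 5.4 VM: create the test cluster
pvecm create testcluster

# On each of the other two VMs: join the cluster, pointing at the
# first VM's IP address (192.168.1.101 is a placeholder)
pvecm add 192.168.1.101

# On any node: verify that all three members are present and quorate
pvecm status
```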
 
All right, I'll test it and update you.
 

In the link you provided, it says:
Note: changes to any VM/CT or the cluster in general are not allowed for the duration of the upgrade!

Then somewhere below it says:
Move important Virtual Machines and Containers
If any VMs and CTs need to keep running for the duration of the upgrade, migrate them away from the node that is currently upgraded. A migration of a VM or CT from an older version of Proxmox VE to a newer version will always work. A migration from a newer Proxmox VE version to an older version may work, but is in general not supported. Keep this in mind when planning your cluster upgrade.


But as I'm doing this in a test environment, I see that once one of the nodes gets upgraded, it is no longer in the quorum and we cannot migrate any VM to it. So how should we prevent downtime for VMs while doing this upgrade?
Do VMs get interrupted if they remain on a node whose corosync is being upgraded?
 
But as I'm doing this in a test environment, I see that once one of the nodes gets upgraded, it is no longer in the quorum and we cannot migrate any VM to it

Then the upgrade was not successful. Did you upgrade the 5.4 nodes to Corosync 3 using the extra repository first? This is crucial.
If a node is not quorate, it probably means that the remaining old ones still run corosync 2 and thus cannot "see" the upgraded node anymore.

Do VMs get interrupted if they remain on a node whose corosync is being upgraded?
Normally not. Or better said: not if you follow the upgrade how-to and disable HA, as described here: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Cluster:_always_upgrade_to_Corosync_3_first
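A rough sketch of the HA part, assuming the standard PVE HA service names (check the wiki for the exact order it recommends):

```shell
# On every node: stop the HA stack before touching corosync,
# so no fencing/watchdog action is triggered during the upgrade
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm

# ... upgrade corosync to version 3 on all nodes ...

# Once the whole cluster is quorate again, re-enable HA
systemctl start pve-ha-crm
systemctl start pve-ha-lrm
```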
 

No, they had all become part of a healthy quorum. This happened while upgrading the 5.4 nodes to corosync 3.
Whenever one of the nodes got upgraded to corosync 3, it wasn't part of the quorum anymore, so my question was how to migrate the VMs to it in order to upgrade the other ones.
 
Whenever one of the nodes got upgraded to corosync 3, it wasn't part of the quorum anymore

Ah, OK, then I misunderstood you a bit. Yes, that's expected and should not cause any trouble. During the corosync upgrade you do not need to (and really should not try to) migrate VMs or CTs; they keep running normally. Once the full cluster is upgraded to corosync 3 (but still on PVE 5.4), you can continue with the "real" upgrade. Only then do you have to migrate the VMs to other nodes to keep them running while their node is being upgraded. That works again because all nodes are already on corosync 3 and can see each other, even after being upgraded.

Hope that makes it a bit more clear.
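To verify that a node has rejoined after its corosync upgrade, something like this can be used (output details vary by version):

```shell
# Membership and quorum from the cluster's point of view:
# look for "Quorate: Yes" and the expected number of votes
pvecm status

# Link status of the corosync 3 (kronosnet) transport
corosync-cfgtool -s
```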
 
Got it, thanks
Appreciate the response
 
I am also trying to do the same upgrade. Apologies ahead of time for hijacking an old post. I have a 5-node cluster on 5.4-15. So far I have everything sitting just after the corosync 3.0 upgrade, which installed successfully and started with all servers in a quorum, and I have fixed the one failure reported by pve5to6.

My underlying storage is kind of slow, and it will be a colossal effort to migrate machines off nodes.

Could I temporarily power down the VMs and CTs off-hours, do the upgrade to 6.0+, and just boot them back up?
 

Sure. Using live migration to empty a node before upgrading and rebooting is just the way to avoid downtime altogether - you don't have to go down that route if downtime is of no/little concern in your environment :)
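If you go the off-hours route, a dry-run sketch like the following can help plan the shutdowns. The VM IDs are hypothetical (take the real ones from `qm list` / `pct list`), and it only prints the commands instead of running them:

```shell
# Dry-run sketch: print, rather than run, the shutdown commands for a
# node's guests. IDs are hypothetical; on a real node take them from
# 'qm list' / 'pct list'.
vmids="100 101 102"
cmds=$(for vmid in $vmids; do echo "qm shutdown $vmid --timeout 120"; done)
echo "$cmds"
```

After the upgrade and reboot, the guests come back via `qm start` (or automatically, if their onboot flag is set).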
 
It should work for the duration of the upgrade (live migration is only tested for old -> new).
 
Yes, that should work.
 
