Upgrade test scenario (Proxmox 5 to 6)

macleod

Prerequisites for this test scenario:
- 3 VMs with 2 ethernet interfaces each (only the first one configured), Proxmox 5.4 installed on test1, test2 and test3, updated to the latest package versions
- cluster created on test1 (the basic commands are sketched below)
- test2 and test3 joined the cluster through test1
- VM test4 installed with Proxmox 6.0, latest updates etc.
- no VMs or CTs added to the cluster, no HA, just freshly installed Proxmox 5 and 6
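
for reference, roughly the commands used to put this together (the cluster name 'testcluster' is just an example; the joins can also be done from the GUI):

Code:
# on test1: create the cluster ('testcluster' is only an example name)
pvecm create testcluster

# on test2 and test3: join the cluster through test1 (its ring0 address)
pvecm add test1

# on any node: check membership and quorum
pvecm status
pvecm nodes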

first test scenario:

0. tried to join test4 (Proxmox 6) to the cluster: error 400 (schema), as expected
1. upgraded test1-3 to corosync 3, all OK
2. upgraded test1 to Proxmox 6, all OK
3. !!! joined test4 (Proxmox 6) through test1: joined OK, pvecm status shows all 4 servers in the cluster, no errors found so far (command sketch below)
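
roughly, the commands behind steps 1-3 (the repository line is copied from the upgrade wiki - please double-check it there; the rest is the usual apt/pvecm stuff):

Code:
# step 1 - on test1, test2, test3 (still PVE 5.4): pull corosync 3 from its dedicated repo
# (verify the repository line against the upgrade wiki before using it)
echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" \
  > /etc/apt/sources.list.d/corosync3.list
apt update && apt dist-upgrade

# step 2 - on test1: switch the APT sources from stretch to buster as described in the wiki,
# then upgrade to Proxmox 6
apt update && apt dist-upgrade

# step 3 - on test4 (fresh Proxmox 6): join through the already-upgraded test1
pvecm add test1
pvecm status    # should now list all 4 nodes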

second test scenario:

1. shut down test3 (still Proxmox 5)
2. removed test3 from the cluster (pvecm delnode test3)
3. installed fresh Proxmox 6 on test3, updated to the latest packages
-. tried to join the cluster through test2 (Proxmox 5): same 400 schema error, as expected
4. successfully joined the cluster through test1 (Proxmox 6); pvecm status shows all 4 servers in the cluster, no errors so far (commands sketched below)
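
the same steps as a command sketch (the docs' warning applies: a removed node must not come back with its old cluster configuration - a fresh reinstall takes care of that):

Code:
# on test3: power it off first
shutdown -h now

# on one of the remaining nodes (e.g. test1), once test3 is really offline:
pvecm delnode test3
pvecm status            # test3 should no longer be listed

# on the freshly reinstalled test3 (Proxmox 6): rejoin, but through a Proxmox 6 node
pvecm add test1
pvecm status            # all 4 nodes should be listed again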

so, in this virtual test configuration it seems to be possible to join a Proxmox 6 machine to a mixed Proxmox 5/6 cluster, as long as the machine assisting the join is already upgraded to Proxmox 6
also, removing / reinstalling / re-joining an existing machine to the cluster seems to work
as a particularity, the machine assisting the join was the first Proxmox 5 machine in the cluster (the cluster was created on it); I don't know if that matters

and now the real question: what can go wrong in the real world when VMs and CTs are created, storages are configured and so on (in particular, no Ceph involved)?
I know it's not the recommended procedure, but in the test lab it seems to be somehow working.
 
- 3 VMs with 2 ethernet interfaces each (only the first one configured),
is the second interface used in the scenario below? (I could not see it in the post...)

Apart from that - the described situations look like they should work (which luckily matches your test results :)

However, I'm not quite sure why you would want to join a new node to a cluster while you're upgrading? (if you need to, the first scenario probably works)
It's definitely a path that is less well tested (because it's used far more seldom) than joining a new node while all nodes are on the same version - which always has the potential for unexpected errors popping up.

I guess I would rather:
* either delay the joining of a fourth node until the whole cluster is on PVE 6 (if you don't need the resources while upgrading)
* join the fourth node while all nodes are on the latest 5.4 - and follow the upgrade instructions (https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0); the pre-upgrade checklist script from the wiki is sketched below
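
As a side note, the checklist script mentioned in the wiki is worth running on every node before (and during) the upgrade - a minimal example of its use (it is shipped with recent pve-manager versions on 5.4):

Code:
# run on each node before each upgrade step
pve5to6
# fix any FAIL (and ideally WARN) items it reports before continuing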

and now the real question: what can go wrong in the real world when VMs and CTs are created, storages are configured and so on (in particular, no Ceph involved)?
during the upgrade from 5 to 6 the corosync version changes - which means that while upgrading there may be times when you don't have quorum (upgrading corosync 2 -> 3 on PVE 5.4, and again a restart while upgrading from stretch to buster),
which can cause those actions (creating guests, adding storages) to fail.
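
If you want to keep an eye on it, quorum can be checked at any point during the upgrade, e.g.:

Code:
# cluster state as pve-cluster sees it (quorum, votes, member list)
pvecm status

# corosync's own quorum view
corosync-quorumtool -s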

I personally try not to change anything while upgrading (apart from the upgrade :) - since if there is a problem I don't need to rule out the extra change as a source of the problem


I hope this helps!
 
the second interface was not used in this scenario (not even configured); it was put there because every server nowadays has at least 2 (mostly 4) NICs included; but as long as only ring0 addresses were involved in the tests, it shouldn't matter - maybe in some further test scenarios

like I've said before, I always prefer to reinstall rather than upgrade in place (as long as it is possible, of course :-P); it's cleaner (like throwing out the "garbage"), without "subtle differences" between servers, and it also gives you the opportunity to make some hardware changes if needed (like: bigger disks, adding more disks when autoresize is not an option, changing the RAID layout etc.; or maybe even replacing the server with a newer/better one)

indeed, the first scenario is a little strange and somewhat useless; but if you cannot join a fresh Proxmox 6 node to a mixed Proxmox 5/6 environment, then it's almost impossible to delete a node, reinstall it and re-add it to the cluster; it may help someone who wants to take the opportunity and upgrade the server too (as in: a new, more powerful server or something)

the second scenario is the actual test, i.e. an OS upgrade by reinstalling the server; some may argue it's the more difficult path, but the time required for a reinstall will probably be almost the same (assuming that in both cases you need to "move" the VMs/CTs from one server to another, and that you are prepared with some configuration manager, i.e. Ansible, Chef, etc.)

indeed, no sane person will (willingly) make any big changes during the upgrade phase (especially when it comes to storage modifications), but there will be:
- moving VMs between servers (most probably online; see the migration sketch below)
- creating / deleting VMs or CTs while the cluster is in interim mode (mixed 5/6) - if the cluster upgrade window is too big and those operations cannot be avoided
of course, any sane person will try to avoid making modifications exactly while an actual server is upgrading, but there will still be an 'interim' window with mixed 5/6 servers
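
for the "moving VMs" part, this is what I have in mind (VM/CT IDs and the target node are just placeholders):

Code:
# live-migrate VM 100 to test1
qm migrate 100 test1 --online

# containers are migrated offline with pct
pct migrate 200 test1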

in conclusion, I have read and understood the recommended upgrade path from 5 to 6; I'm just trying to understand / imagine what can go wrong with the "delete node > reinstall > join" scenario in a real-world environment. Like I said, the test on a freshly installed virtual cluster was OK, but as we know, the journey from the lab to the real world is sometimes long and dangerous :-P.

thank you!
 
