Upgrading PVE 5 to 6

praenuntius
Hello folks,

I keep putting off posting this, but I'm looking for some advice on upgrading from PVE 5.4-13 to PVE 6.x. I'm running a 5-node cluster where 3 nodes are all-flash Ceph-backed machines and the other two are non-Ceph game server hosting machines. The Ceph machines communicate over 2x 10Gb links (one for VM traffic and one for the storage network), which works fine, but as far as I can tell neither the UniFi 16XG switch nor the openvswitch packages support multicast, so when I built the cluster years ago I configured it to use udpu for Corosync 2.x communication. When I run pve5to6 I get:
Code:
Checking totem settings..
FAIL: Corosync transport explicitly set to 'udpu' instead of implicit default!
PASS: Corosync encryption and authentication enabled.

INFO: run 'pvecm status' to get detailed cluster status..

= CHECKING INSTALLED COROSYNC VERSION =

FAIL: corosync 2.x installed, cluster-wide upgrade to 3.x needed!
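
For reference, a totem section configured this way looks roughly like mine (cluster name, address, and version number below are just placeholders):

Code:
totem {
  cluster_name: example-cluster
  config_version: 5
  interface {
    bindnetaddr: 10.10.10.0
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  transport: udpu
  version: 2
}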

From what I've read, Corosync 3.x doesn't even use multicast any longer, so what I'm wondering is: is there a safe way to upgrade to PVE 6 while using udpu, despite the warnings from the pve5to6 tool?

Thanks for reading and thanks in advance for any answers.
 
you'd need to modify your configuration after upgrading to Corosync 3.x as per https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 .

if your cluster is not quorate (likely), you'll need to
  • modify the configuration manually to switch to the default transport 'knet' (copy it to some local path, edit it, and bump config_version)
  • then stop corosync and pve-cluster on all nodes
  • then start pmxcfs in local mode (pmxcfs -l) on all nodes
  • then copy your modified configuration to /etc/corosync/corosync.conf and /etc/pve/corosync.conf on all nodes
  • then stop the local-mode pmxcfs instances (killall pmxcfs, or Ctrl+C in the terminal where they are running)
  • then start corosync and pve-cluster again
  • verify their status with systemctl status pve-cluster corosync and pvecm status
  • re-run pve5to6
these steps should be done after upgrading to Corosync 3.x, while the HA services are stopped (to prevent fencing). you should not do any other modifications (guest creations, backups, migrations) during this process.
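
a rough sketch of that sequence, assuming the edited config was saved to /root/corosync.conf.new (a made-up path, use whatever you like):

Code:
# on EVERY node, with the HA services (pve-ha-lrm, pve-ha-crm) already stopped:
systemctl stop pve-cluster corosync

# start pmxcfs in local mode so /etc/pve is writable without quorum
pmxcfs -l

# copy in the edited config (transport switched to knet, config_version bumped)
cp /root/corosync.conf.new /etc/corosync/corosync.conf
cp /root/corosync.conf.new /etc/pve/corosync.conf

# stop the local-mode instance, then bring everything back up
killall pmxcfs
systemctl start corosync pve-cluster

# verify
systemctl status pve-cluster corosync
pvecm status
pve5to6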
 
Hi,
I was running corosync2 with transport:udpu too.

The easier way: before upgrading any node to corosync3, remove the "transport: udpu" line from corosync.conf, but don't increase the config_version.
(since the version isn't bumped, corosync2 never reloads the config, so it never switches to multicast mode)
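
for example (illustrative totem section; the only change is deleting the transport line):

Code:
totem {
  cluster_name: example-cluster
  config_version: 5   # deliberately NOT bumped, so running corosync2 never reloads it
  ip_version: ipv4
  secauth: on
  version: 2
  # 'transport: udpu' line removed; corosync3 will default to knet here
}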


then upgrade all your proxmox5 nodes to corosync3. (the newly upgraded corosync3 nodes will see the same config version, now with the knet transport, but won't talk to corosync2 nodes still running udpu.)
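
on PVE 5 that's done via the dedicated corosync-3 repository from the upgrade wiki; roughly (verify the repo line against the wiki before using it):

Code:
echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" \
  > /etc/apt/sources.list.d/corosync3.list
apt update
apt dist-upgrade   # pulls in corosync 3.x with kronosnet while still on PVE 5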

after that, upgrade each node to proxmox6.
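
per node that's the usual stretch-to-buster dist-upgrade, something like this (adjust the repo list names to your setup, e.g. pve-enterprise vs. no-subscription):

Code:
pve5to6                                   # re-check before each node
sed -i 's/stretch/buster/g' /etc/apt/sources.list
sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/pve-enterprise.list
apt update
apt dist-upgrade                          # then reboot into the PVE 6 kernel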

after that, upgrade your ceph (Luminous to Nautilus).
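
very roughly, following the separate Ceph upgrade wiki (double-check every step there first; repo line below is from memory):

Code:
ceph osd set noout                               # avoid rebalancing during restarts
# point ceph.list at the nautilus/buster repo, then upgrade the packages
echo "deb http://download.proxmox.com/debian/ceph-nautilus buster main" \
  > /etc/apt/sources.list.d/ceph.list
apt update && apt dist-upgrade
# restart daemons in order, node by node: mon -> mgr -> osd
systemctl restart ceph-mon.target ceph-mgr.target ceph-osd.target
ceph osd require-osd-release nautilus            # once ALL OSDs run Nautilus
ceph osd unset noout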
 
Hello,

Thanks for your tip spirit, it worked well for me.
Before, /var/log/syslog used to contain lines like this:
Code:
(...) corosync[1785]: notice  [TOTEM ] Initializing transport (UDP/IP Unicast).
(...) corosync[1785]:  [TOTEM ] Initializing transport (UDP/IP Unicast).
(...) corosync[1798]: notice  [TOTEM ] Initializing transport (UDP/IP Unicast).
(...) corosync[1798]:  [TOTEM ] Initializing transport (UDP/IP Unicast).

And just after completing the "always upgrade to Corosync 3 first" step using your tip:
Code:
# (...) corosync[8723]:   [TOTEM ] Initializing transport (Kronosnet).
 
