Q: Upgrade Proxmox CLUSTER 4.0 to 4.4 (latest)

fortechitsolutions

Hi, I have a bit of a question; I wonder if anyone can comment. I've got a 3-node Proxmox cluster that is running version 4.0.57 as the 'before' state. Tonight I planned to do updates and reboots on all nodes, as follows (rough command sketch after the list):

- live migrate VMs off Node3->Node2
- apt-get update; apt-get dist-upgrade on node 3; reboot
- migrate VMs from Node2->Node3
- do upgrade there
- finally, migrate VMs Node1->Node2, then patch and reboot Node1.
- re-balance VMs across nodes as per the original allocation.
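
For clarity, the per-node sequence I had in mind looks roughly like this (pve2/pve3 and VMID 106 are just examples from my setup):

Code:
# on the node being drained (e.g. pve3): live-migrate each running guest
qm migrate 106 pve2 --online

# then bring the node up to date and reboot it
apt-get update
apt-get dist-upgrade
reboot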

The only wrinkle: Node3 was updated and rebooted without error. It is now running the latest Proxmox (4.4), but when I try to migrate a VM from Node2 (v4.0) to Node3 (v4.4) it returns an error:

Code:
Jan 25 20:18:09 starting migration of VM 106 to node 'pve3' (10.82.141.23)
Jan 25 20:18:09 copying disk images
Jan 25 20:18:09 starting VM 106 on remote node 'pve3'
Jan 25 20:18:12 ERROR: online migrate failure - unable to detect remote migration address
Jan 25 20:18:12 aborting phase 2 - cleanup resources
Jan 25 20:18:12 migrate_cancel
Jan 25 20:18:13 ERROR: migration finished with problems (duration 00:00:05)
TASK ERROR: migration problems
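
(For what it's worth, the version mismatch between the two nodes is easy to confirm from any node; something like this should show it, the IPs being my pve2/pve3 as in the log above:)

Code:
# compare the installed Proxmox versions on the old and new node
ssh root@10.82.141.22 pveversion
ssh root@10.82.141.23 pveversion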


Similarly, as a sanity test, if I try to migrate a powered-off VM from Node3 to Node2, it gives me this error:
Code:
Virtual Environment 4.4-5/c43015a5
Virtual Machine 104 ('MIGRATED--gpli-vsp2---prod-db') on node 'pve3'
ERROR: unknown command 'mtunnel'
USAGE: pvecm <COMMAND> [ARGS] [OPTIONS]
       pvecm add <hostname> [OPTIONS]
       pvecm addnode <node> [OPTIONS]
       pvecm create <clustername> [OPTIONS]
       pvecm delnode <node>
       pvecm expected <expected>
       pvecm keygen <filename>
       pvecm nodes
       pvecm status
       pvecm updatecerts  [OPTIONS]

       pvecm help [<cmd>] [OPTIONS]
Jan 25 20:19:25 ERROR: migration aborted (duration 00:00:00): command '/usr/bin/ssh -o 'BatchMode=yes' root@10.82.141.22 pvecm mtunnel --get_migration_ip' failed: exit code 255
TASK ERROR: migration aborted
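
As far as I can tell, the failing step is the 4.4 node invoking 'pvecm mtunnel' on the 4.0 node, which doesn't know that subcommand yet. The same failure can be reproduced by hand from pve3 (root SSH between cluster nodes is already in place, since the migration itself relies on it):

Code:
# re-run the exact probe the migration task uses
/usr/bin/ssh -o 'BatchMode=yes' root@10.82.141.22 pvecm mtunnel --get_migration_ip
echo $?   # exits 255 on the 4.0 node, matching the log above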


I'm wondering if there is a proper procedure that allows me to do this upgrade without problems.
-- Do I need to power off the VMs, upgrade the Proxmox hosts, and wait until all hosts are at the same level before migrations work again? Is migration simply impossible with the current difference in Proxmox versions?

-- the wiki pages on upgrades and on clusters don't discuss this.

-- and digging through the forums, I can't find any pointers either.

Right now, if I point my browser at the admin IP of host1 or host2, I see the version 4.0 Proxmox admin UI (showing 3 nodes); if I point it at host3, I see the version 4.4 web UI on the updated node. In all cases I can see all 3 Proxmox nodes listed in the web UI, and the 'health' of the cluster seems OK based on output such as:

Code:
root@pve3:/etc/apt# pvecm status
Quorum information
------------------
Date:             Wed Jan 25 20:23:00 2017
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000003
Ring ID:          1/1044
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.82.141.21
0x00000002          1 10.82.141.22
0x00000003          1 10.82.141.23 (local)
root@pve3:/etc/apt#

which gives the proper/expected output across all 3 nodes.
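(If anyone wants to double-check from every member rather than just pve3, a loop along these lines should do it; the IPs are my three nodes:)

Code:
# spot-check quorum state from all three nodes
for ip in 10.82.141.21 10.82.141.22 10.82.141.23; do
  echo "== $ip =="
  ssh root@$ip pvecm status | grep -E 'Quorate|Nodes:'
done
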
Any help is greatly appreciated.

Thanks,

Tim
 
Don't quote me on this, but I'm fairly sure you should do the apt-get dist-upgrade on all nodes; this will not affect any running VMs, but it will bring all Proxmox services and packages up to date with each other. You should then be able to live-migrate the VMs across to the already-rebooted node, so you can reboot each node to complete the kernel upgrade.

Updates can be done on a node online without affecting any running VMs; a reboot is only required for the kernel upgrade.
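
Roughly, per remaining node, something like this (a sketch only; substitute your own VM IDs and target node):

Code:
# update packages in place first -- safe with guests running
apt-get update
apt-get dist-upgrade

# then live-migrate guests off and reboot for the new kernel
qm migrate <vmid> pve3 --online
reboot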
 
Thanks very much! Greatly appreciated. I've just completed apt-get update; apt-get dist-upgrade on the 'node2' host while it had VMs running, with no impact or issues. Now I can live-migrate the VMs over to node PVE3 without error. Once that is done I can carry on: reboot the PVE2 node, then rinse and repeat the process for my remaining PVE1 node.

Thanks for the help!

Tim
 
