How to upgrade from 4 to 5

Ivan Gersi

I have a little problem... I've upgraded the cluster several times before (from 2 to 3, from 3 to 4), but either I'm getting too old or my brain is failing me.
I had pve1 and pve2 in a cluster. I migrated all VMs to pve2 or made backups. Then I did a fresh install of 5.2-1 on pve1 (with new disks and a new RAID volume). I created a new cluster on pve1, while the old cluster still lives on pve2 (pve2 is still on 4.4-13, and I'd like to do a fresh 5.2-1 install there too).
Now I'd like to migrate all VMs from pve2 to pve1, then reinstall pve2 and join it to the pve1 cluster.
It sounds easy... but I'm stuck. I can't make a backup on pve2 now (unable to open file '/etc/pve/nodes/pve2/qemu-server/102.conf.tmp.20797' - Permission denied), I can't delete pve1 from the cluster, and I can't add the new pve1 to the cluster (it already has its own cluster, and pve2 is waiting for the old pve1, not the new one with the same IP). I have no idea what to do now.
My problem is that the VMs running on pve2 need to stay online (mail, Samba server), so I have to migrate all VMs to pve1 first, and only then can I upgrade pve2.
 
Hi,
you can't change /etc/pve without quorum (see pvecm status).
You can force quorum with "pvecm expected 1".
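For example, on the remaining node:
Code:
pvecm status        # check the current quorum state
pvecm expected 1    # tell corosync to expect a single vote, forcing quorum (use with care)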

But I'm not sure your way is the best one. Use the search function - spirit has a posting about migrating from PVE 4 to PVE 5 online.

Udo
 
I've had bad experiences with online upgrading... Debian (which I use on my other servers) is generally strange about it... sometimes the upgrade worked fine, sometimes I had to hunt down solutions for many issues. I think every distro upgrade is a hard fight.
Yes, I know I can change the quorum, but I'm afraid to touch anything on pve2... maybe I'd rather delete the pve1 cluster (there are no VMs on it) and join pve1 to pve2... migrate the VMs, and then I can play with pve2.
Can I delete the old pve1 from the pve2 cluster (I think I have to fix the quorum on pve2 first) and then add the new pve1? I think this is the safer way.
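Roughly, the removal part should look like this (on pve2, after forcing quorum as Udo described; just a sketch using the node names from this thread):
Code:
pvecm expected 1      # only one node is left, so force quorum first
pvecm delnode pve1    # drop the old, no longer existing pve1 from the cluster config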
 
I had a problem joining the new pve1 to pve2. I had to fix/decrease the quorum first, delete the old pve1 from pve2, and then use "-use_ssh" with pvecm add on pve1. It was a hard fight, but now I have pve1 and pve2 in one cluster (5.2 vs 4.4-13).
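Reconstructed roughly, the join step on the new pve1 was along these lines (pve2's IP is taken from the migration log further down; a sketch, not the exact command history):
Code:
# on the freshly installed pve1, after the old pve1 entry was removed on pve2
pvecm add 192.168.3.78 -use_ssh    # join the existing cluster over SSH instead of the API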
My last problem is understanding why the rest of the disk space goes to LVM-thin by default... I've read the wiki several times but still don't understand it ;o).
Btw, I could see the LVM-thin storage under Datacenter before I joined pve1 to the cluster; now I can't see it anymore (it was removed from storage.cfg by some process, too). Strange.
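For reference, the default local-lvm storage the installer creates is just an lvmthin entry in /etc/pve/storage.cfg; assuming the installer's default volume group pve and thin pool data, it could be added back with something like this (a sketch, the names may differ on your install):
Code:
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content images,rootdir --nodes pve1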
 
OK, I've found one issue... I can migrate a machine from the old pve2 to the new pve1, but I can't migrate it back.
Code:
root@pve1:/etc# qm migrate 102 pve2
2018-08-01 22:49:27 starting migration of VM 102 to node 'pve2' (192.168.3.78)
2018-08-01 22:49:27 found local disk 'local:102/vm-102-disk-1.qcow2' (in current VM config)
2018-08-01 22:49:27 copying disk images
ERROR: unknown command 'import'
USAGE: pvesm <COMMAND> [ARGS] [OPTIONS]
       pvesm add <type> <storage> [OPTIONS]
       pvesm remove <storage>
       pvesm set <storage> [OPTIONS]

       pvesm alloc <storage> <vmid> <filename> <size> [OPTIONS]
       pvesm free <volume> [OPTIONS]
       pvesm list <storage> [OPTIONS]

       pvesm glusterfsscan <server>
       pvesm iscsiscan -portal <string> [OPTIONS]
       pvesm lvmscan
       pvesm lvmthinscan <vg>
       pvesm nfsscan <server>
       pvesm zfsscan

       pvesm status  [OPTIONS]

       pvesm extractconfig <volume>
       pvesm path <volume>

       pvesm help [<cmd>] [OPTIONS]
command 'dd 'if=/var/lib/vz/images/102/vm-102-disk-1.qcow2' 'bs=4k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2018-08-01 22:49:29 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export local:102/vm-102-disk-1.qcow2 qcow2+size - -with-snapshots 1 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' root@192.168.3.78 -- pvesm import local:102/vm-102-disk-1.qcow2 qcow2+size - -with-snapshots 1' failed: exit code 255
2018-08-01 22:49:29 aborting phase 1 - cleanup resources
2018-08-01 22:49:29 ERROR: found stale volume copy 'local:102/vm-102-disk-1.qcow2' on node 'pve2'
2018-08-01 22:49:29 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export local:102/vm-102-disk-1.qcow2 qcow2+size - -with-snapshots 1 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' root@192.168.3.78 -- pvesm import local:102/vm-102-disk-1.qcow2 qcow2+size - -with-snapshots 1' failed: exit code 255
migration aborted
There is no VM 102 on pve2.
Code:
root@pve2:/var# qm status 102
Configuration file 'nodes/pve2/qemu-server/102.conf' does not exist
root@pve2:/var# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 posta                running    4096            1420.00 2139
       101 win2008III           running    4096             128.00 25967
       103 win2008IV            running    4096              20.00 2229
       104 ERAS                 running    4096              32.00 13568
 
Yes, migration from new -> old is generally not supported.
We try not to break it on purpose, but we do not go out of our way to make it possible.
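If a VM does have to move back from the newer node to the older one, an offline detour through a backup is one possible route; very roughly (assuming local storage on both nodes, and it can still fail if the VM config uses options the old qemu-server does not understand):
Code:
# on pve1: stop the VM and dump it to a local backup file
vzdump 102 --mode stop --storage local --compress lzo
# copy the archive from /var/lib/vz/dump/ to pve2, then restore it there
qmrestore /var/lib/vz/dump/vzdump-qemu-102-<timestamp>.vma.lzo 102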
 
