[SOLVED] Can I mix Proxmox 6 and Proxmox 7 in the same Cluster?

Razva

Hello,

I'm adding new servers to my stack and I would like to use Proxmox 7 on all new hardware.

Would it be possible to join the new Proxmox 7 servers on the current/existing Proxmox 6 cluster in order to migrate the VMs, then decommission all (unused) Proxmox 6 servers? Can Proxmox do mix-and-match between 6 and 7?

Thank you
 
Hi,
Would it be possible to join the new Proxmox 7 servers on the current/existing Proxmox 6 cluster in order to migrate the VMs, then decommission all (unused) Proxmox 6 servers? Can Proxmox do mix-and-match between 6 and 7?
It is technically possible, as the cluster stacks of 6.4 and 7.0 are pretty similar, but we recommend against running such a mixed setup for longer than required.

Also, note that only forward (live) migrations from 6.4 to the 7.0 servers are supported; you won't be able to (live) migrate anything started on a PVE 7.0 node back to the old 6.4 nodes.
So it can be used for a forward migration to new hardware, but for long-term mix and match: no.
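
Roughly, the join-then-migrate flow could look like this; the IP and node names below are just placeholders for your own setup:

Code:
# On the new PVE 7.0 node: join the existing cluster
# (192.168.1.10 is a placeholder for any current 6.4 cluster member)
pvecm add 192.168.1.10

# Verify quorum and that the new node shows up afterwards
pvecm status
pvecm nodes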
 
Hi,

It is technically possible, as the cluster stacks of 6.4 and 7.0 are pretty similar, but we recommend against running such a mixed setup for longer than required.

Also, note that only forward (live) migrations from 6.4 to the 7.0 servers are supported; you won't be able to (live) migrate anything started on a PVE 7.0 node back to the old 6.4 nodes.
So it can be used for a forward migration to new hardware, but for long-term mix and match: no.
My plan is to basically add the new 7.0 hardware into the cluster, migrate everything from 6 to 7 in a couple of days, and then decommission all 6.x servers.

Is this an OK-ish plan, or would it be better to just set up a new 7 cluster, stop all VMs, move each VM one by one, and start them on the new cluster?
 
Is this an OK-ish plan, or would it be better to just set up a new 7 cluster, stop all VMs, move each VM one by one, and start them on the new cluster?
It can be OK, but I'd highly recommend testing the live migration first with some unimportant test VM, ideally one similar to what your actual VM workload looks like, to ensure all goes well and that your workload runs OK on the new PVE 7.0 stack.
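
For such a test, the rough flow could look like this; VMID, storage and node name are placeholders to adapt to your environment:

Code:
# Create a small throwaway test VM on a 6.4 node (ID, storage and sizes are placeholders)
qm create 9999 --name migrate-test --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:8 --ostype l26
qm start 9999

# Live-migrate it forward to the new 7.0 node and check that it keeps running
qm migrate 9999 pve7-node1 --online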
 
It can be OK, but I'd highly recommend testing the live migration first with some unimportant test VM, ideally one similar to what your actual VM workload looks like, to ensure all goes well and that your workload runs OK on the new PVE 7.0 stack.
In this case I think I'll go the safe route. Sure, I have backups, but I think I prefer to work 5 mins more per VM and be safe rather than work 5 mins less and be sorry.

I'll set up a new 7 cluster, stop each VM on 6, copy it to 7, and start it on 7. If a VM fails on 7 I'll just start it back up on 6. Seems better/safer than just going all-in with 6+7.

Sounds like a plan?
 
Sounds like a plan. I'd still go for a simple test with a similar-ish VM created on 6.4 and then moved to 7.0, just as you plan to do for the production ones; IMO that has no drawback and can only help.
 
Sounds like a plan. I'd still go for a simple test with a similar-ish VM created on 6.4 and then moved to 7.0, just as you plan to do for the production ones; IMO that has no drawback and can only help.
Ok, I'll do that as well and get back with some feedback.

PS: thank you for being so nice, you are one of the most helpful/polite Proxmox reps I've ever encountered on this forum.
 
Just my experience with this:
Running a mixed cluster of both (latest) 6.4.x and 7.0.x nodes, the cluster functions as far as I have been able to determine.

Noticeable:

An online migration of a VM from a 6.4.x node to a 7.0.x node (with shared storage) opens a tunnel to the target node and starts a transfer over SSH where the performance is (to say the least) very poor.
The transfer was on average 5-7 MB/s.
By comparison, a transfer from a 7.0.x node to a 7.0.x node (equal versions) was 112 MB/s.
 
Just my experience with this:
Running a mixed cluster of both (latest) 6.4.x and 7.0.x nodes, the cluster functions as far as I have been able to determine.

Noticeable:

An online migration of a VM from a 6.4.x node to a 7.0.x node (with shared storage) opens a tunnel to the target node and starts a transfer over SSH where the performance is (to say the least) very poor.
The transfer was on average 5-7 MB/s.
By comparison, a transfer from a 7.0.x node to a 7.0.x node (equal versions) was 112 MB/s.


Glowsome,
Any chance you can give an idea of what you were getting for live migrations from 6.4 to 6.4 nodes prior to the upgrade?
 
Glowsome,
Any chance you can give an idea of what you were getting for live migrations from 6.4 to 6.4 nodes prior to the upgrade?
As far as I remember, and I must admit it's been a long time since I last migrated a VM, it has never been this low; however, I do not have exact figures.
I mean, the cluster nodes are all connected via 1 Gb Ethernet (no separate network for cluster comms) and all have dual NICs (failover config).
 
An online migration of a VM from a 6.4.x node to a 7.0.x node (with shared storage) opens a tunnel to the target node and starts a transfer over SSH where the performance is (to say the least) very poor.
The transfer was on average 5-7 MB/s.
By comparison, a transfer from a 7.0.x node to a 7.0.x node (equal versions) was 112 MB/s.
Note, both PVE 6.x and 7.0 use the SSH tunnel by default for secure migration traffic (to avoid exposing your VM's memory with all its secrets to the whole network), so nothing changed there.

FWIW, I run a three-node cluster (all older E5-2620 v3) for test purposes, with one node upgraded to 7.0 and the other ones still at 6.4, to test upgrades and general issues with new vs. old versions. I saw no such regression there, so this seems more like a configuration and/or HW (regression) issue specific to your setup.

For example, an excerpt of a migration task log from old 6.4 -> new 7.0, using for migration a 10G network that is also a quite active Ceph cluster network:
Code:
2021-06-23 12:19:59 start migrate command to unix:/run/qemu-server/136.migrate
2021-06-23 12:20:00 migration active, transferred 457.0 MiB of 2.0 GiB VM-state, 525.3 MiB/s
2021-06-23 12:20:01 migration active, transferred 964.3 MiB of 2.0 GiB VM-state, 500.3 MiB/s
2021-06-23 12:20:02 migration active, transferred 1.4 GiB of 2.0 GiB VM-state, 531.8 MiB/s
2021-06-23 12:20:03 average migration speed: 516.2 MiB/s - downtime 97 ms
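
Side note: if the SSH overhead ever becomes a concern, the migration transport and the network used for it can, as far as I know, be set cluster-wide in /etc/pve/datacenter.cfg, roughly like this (the CIDR is a placeholder, and 'insecure' should only be used on a fully isolated, trusted network):

Code:
# /etc/pve/datacenter.cfg (sketch)
# secure   = tunnel migration traffic over SSH (the default)
# insecure = skip the SSH tunnel, only sane on an isolated/trusted migration network
migration: secure,network=10.10.10.0/24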
 
I can't migrate from 7 to 6.4, live or offline; I get this error:

2021-08-01 15:33:34 found local disk 'zfspool:vm-244-disk-0' (in current VM config)
2021-08-01 15:33:34 copying local disk images
2021-08-01 15:33:36 full send of zfspool/vm-244-disk-0@__migration__ estimated size is 17.4G
2021-08-01 15:33:36 total estimated size is 17.4G
2021-08-01 15:33:36 Unknown option: snapshot
2021-08-01 15:33:36 400 unable to parse option
2021-08-01 15:33:36 pvesm import <volume> <format> <filename> [OPTIONS]
2021-08-01 15:33:37 command 'zfs send -Rpv -- zfspool/vm-244-disk-0@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2021-08-01 15:33:37 ERROR: storage migration for 'zfspool:vm-244-disk-0' to storage 'zfspool' failed -
 
I can't migrate from 7 to 6.4, live or offline; I get this error:

2021-08-01 15:33:34 found local disk 'zfspool:vm-244-disk-0' (in current VM config)
2021-08-01 15:33:34 copying local disk images
2021-08-01 15:33:36 full send of zfspool/vm-244-disk-0@__migration__ estimated size is 17.4G
2021-08-01 15:33:36 total estimated size is 17.4G
2021-08-01 15:33:36 Unknown option: snapshot
2021-08-01 15:33:36 400 unable to parse option
2021-08-01 15:33:36 pvesm import <volume> <format> <filename> [OPTIONS]
2021-08-01 15:33:37 command 'zfs send -Rpv -- zfspool/vm-244-disk-0@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2021-08-01 15:33:37 ERROR: storage migration for 'zfspool:vm-244-disk-0' to storage 'zfspool' failed -
Make a full VM backup on PVE6, copy the backup file to PVE7, restore the backup file on PVE7.
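
A rough sketch of that flow, with VMID, paths, storage names and the target host as placeholders:

Code:
# On the PVE 6 node: full stop-mode backup of the VM
vzdump 244 --mode stop --storage local --compress zstd

# Copy the resulting archive to the PVE 7 node (filename/host are placeholders)
scp /var/lib/vz/dump/vzdump-qemu-244-<timestamp>.vma.zst root@pve7:/var/lib/vz/dump/

# On the PVE 7 node: restore the backup, then start the VM there
qmrestore /var/lib/vz/dump/vzdump-qemu-244-<timestamp>.vma.zst 244 --storage local-zfs
qm start 244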
 
Hi,

It is technically possible, as the cluster stacks of 6.4 and 7.0 are pretty similar, but we recommend against running such a mixed setup for longer than required.

Also, note that only forward (live) migrations from 6.4 to the 7.0 servers are supported; you won't be able to (live) migrate anything started on a PVE 7.0 node back to the old 6.4 nodes.
So it can be used for a forward migration to new hardware, but for long-term mix and match: no.
In a 6.4 cluster I added a 7.x node, then migrated a virtual machine from 6.4 to 7 and back.
There were no problems with the migration; the virtual machine stayed online.
 
We observed the same behavior here: VMs can be live-migrated from PVE6 to PVE7 and back AS LONG AS THEY HAVE NOT BEEN STARTED ON A PVE7 node!
You can't, for example, start a VM on a PVE7 node and live-migrate it to PVE6; AFAIK that's the only limitation.

Note: the VM won't crash, the migration will simply fail with an error about the version, so it's still safe.
 
For the record: we encountered another limitation today.
If you're using 'storage replication' between 2 nodes, a sync from a PVE7 node to a PVE6 node will fail with 'Unknown option: snapshot'.
The '-snapshot' parameter has been added to pvesm in PVE7 and is used by PVE7 when syncing.

Not really a big deal, but it should be taken into consideration when planning an in-place cluster upgrade.
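
Before/while doing such an in-place upgrade, it can be worth checking which replication jobs you have and in which direction they sync, e.g.:

Code:
# List all configured storage replication jobs and their target nodes
pvesr list

# Show the last sync result per job (failing PVE7 -> PVE6 jobs show up here)
pvesr status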
 
Hello,

I'm adding new servers to my stack and I would like to use Proxmox 7 on all new hardware.

Would it be possible to join the new Proxmox 7 servers on the current/existing Proxmox 6 cluster in order to migrate the VMs, then decommission all (unused) Proxmox 6 servers? Can Proxmox do mix-and-match between 6 and 7?

Thank you
I've not tested moving virtual machines, but I've tested moving LXC containers, and it only works in one direction:

Migrating LXC containers from 6.4 to 7.2 works fine, but migrating the same LXC container (a TurnKey OpenVPN server, which doesn't work on the 7.2 host) back from 7.2 to 6.4 failed with:


2022-05-05 20:58:34 starting migration of CT 101 to node 'pve' (192.168.10.10)
2022-05-05 20:58:34 found local volume 'local-zfs:subvol-101-disk-0' (in current VM config)
2022-05-05 20:58:38 Unknown option: snapshot
2022-05-05 20:58:38 400 unable to parse option
2022-05-05 20:58:38 pvesm import <volume> <format> <filename> [OPTIONS]
2022-05-05 20:58:38 full send of rpool/data/subvol-101-disk-0@ok estimated size is 691M
2022-05-05 20:58:38 send from @ok to rpool/data/subvol-101-disk-0@__replicate_101-0_1651693502__ estimated size is 3.08M
2022-05-05 20:58:38 send from @__replicate_101-0_1651693502__ to rpool/data/subvol-101-disk-0@__migration__ estimated size is 5.78M
2022-05-05 20:58:38 total estimated size is 700M
2022-05-05 20:58:39 command 'zfs send -Rpv -- rpool/data/subvol-101-disk-0@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2022-05-05 20:58:40 ERROR: storage migration for 'local-zfs:subvol-101-disk-0' to storage 'local-zfs' failed - command 'set -o pipefail && pvesm export local-zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve' root@192.168.10.10 -- pvesm import local-zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __migration__ -delete-snapshot __migration__ -allow-rename 1' failed: exit code 255
2022-05-05 20:58:40 aborting phase 1 - cleanup resources
2022-05-05 20:58:40 ERROR: found stale volume copy 'local-zfs:subvol-101-disk-0' on node 'pve'
2022-05-05 20:58:40 start final cleanup
2022-05-05 20:58:40 ERROR: migration aborted (duration 00:00:06): storage migration for 'local-zfs:subvol-101-disk-0' to storage 'local-zfs' failed - command 'set -o pipefail && pvesm export local-zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve' root@192.168.10.10 -- pvesm import local-zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __migration__ -delete-snapshot __migration__ -allow-rename 1' failed: exit code 255

TASK ERROR: migration aborted
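
For reference, the commands involved are roughly these; VMID, node and storage names are placeholders, and I haven't verified that a backup taken on 7.2 restores cleanly on 6.4:

Code:
# Forward move (6.4 -> 7.2) in restart mode (the container is briefly stopped)
pct migrate 101 pve72 --restart

# For the failing reverse direction, restoring from a vzdump backup taken on 7.2
# would be a possible fallback (archive path and storage are placeholders, untested)
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-<timestamp>.tar.zst --storage local-zfs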
 
