Upgrade 2-node cluster from PVE 8.4 to 9 with boot disk change

artur.p
Sep 3, 2025
Hello!

I plan to upgrade my small PVE cluster to version 9.
My current cluster configuration is:
- 2 nodes in a cluster (with a QDevice), PVE 8.4
- Each node has 2 disks
- First disk for the system (PVE 8.4 installed from ISO)
- Second disk dedicated to ZFS: VM and container disks
- Replication of the VM and container disks is enabled between the 2 nodes (on the ZFS pool on the second disk)

I'd like to replace the system disk on the first node and then install PVE 9 from scratch on that node. The second node can be upgraded in place.
I wonder if I can preserve the second disk's setup (partitions and content), and how to do it?
Or maybe my question doesn't make sense, as PVE 9 will restore the guests' partitions and disks on the second disk after the upgrade, once the cluster is rebuilt.

Thank you for your help.

BTW: I suppose the correct way to upgrade in my case is to remove one node from the cluster, install the new disk, install PVE 9, then add the node to the cluster again. Am I right?
 
So... I upgraded the cluster on my own.

- I moved all my VMs/containers to the second node
- I followed the guide to remove the first node from the cluster (be careful, there are several steps to follow there)
- Don't forget to remove any replication jobs between the nodes first
- I replaced the boot disk on my now standalone node
- I booted/installed PVE 9 from a USB stick
- I re-added the standalone node to the cluster (use the Cluster Manager guide and follow all the steps)
- Be careful if you have a QDevice: you probably need to remove it from the cluster config and set it up again to make it work
- Check your config with 'pve8to9 --full' on each cluster node
- I wiped the second disk with the ZFS partitions (VM/LXC virtual disks) on my first node and created a new ZFS pool on it
- You can rebuild your replication at this step (rough example commands for all of these steps are below)
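
For reference, this is roughly what the first steps look like on the CLI. All node names, IPs, VMIDs and job IDs below are placeholders, adapt them to your setup.

Code:
# on node1: move the guests to node2 (100/101 are example VMIDs)
qm migrate 100 node2            # add --online if the VM is running
pct migrate 101 node2 --restart # running containers need a restart migration

# list and remove the replication jobs before removing the node
pvesr list
pvesr delete 100-0

# power off node1, then on node2 remove it from the cluster
pvecm delnode node1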
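
After the fresh PVE 9 install on the new boot disk, joining the cluster again and fixing the QDevice looks roughly like this (in this sketch 10.0.0.2 is the other node and 10.0.0.3 the qnetd host):

Code:
# on the freshly installed node1: join the existing cluster
pvecm add 10.0.0.2

# QDevice: remove it from the cluster config and set it up again
# (run on one of the cluster nodes)
pvecm qdevice remove
pvecm qdevice setup 10.0.0.3

# sanity check, run on each node
pve8to9 --full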
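
For the second disk and the replication, something like the following. Pool name, disk path and schedule are just examples; if you recreate the pool with the same name as before, the existing (cluster-wide) storage entry should keep working.

Code:
# on node1: wipe the old ZFS disk and create a fresh pool on it
wipefs -a /dev/disk/by-id/ata-EXAMPLEDISK
zpool create -f -o ashift=12 tank /dev/disk/by-id/ata-EXAMPLEDISK

# only needed if the ZFS storage is not defined in the cluster yet
pvesm add zfspool local-zfs -pool tank -content images,rootdir

# recreate a replication job (VM 100, replicate to node1 every 30 minutes)
pvesr create-local-job 100-0 node1 --schedule '*/30'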

The cluster worked fine with nodes on mixed PVE versions (8 and 9).
I then did an in-place PVE upgrade of the second node with no major issues.
Some files must be edited manually (e.g. the apt sources).
Don't mix apt .list and .sources files for the same repositories: either keep the .list files for now or replace them all with the deb822 .sources format.
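
As a rough example of the deb822 format, assuming the no-subscription repository, the file could look like this; double-check the paths and components against the official upgrade guide (the Signed-By location in particular depends on how the Proxmox release key is installed on your system).

Code:
# /etc/apt/sources.list.d/proxmox.sources
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg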
After the reboot I started recreating the replication jobs, which recreated all the missing VM/LXC filesystems on the wiped disk with its fresh ZFS pool.

I tested some migrations and they were OK.
However, the status icon in the GUI showed some LXCs as "stopped" even though the containers were running.
After a few minutes the status updated and everything looked fine.
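
If you see the same thing, you can check the real state from the CLI while the GUI catches up (101 is just an example VMID):

Code:
pct status 101
pvesh get /cluster/resources --type vm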

I hope this helps...
 