Hello all,
We are currently running a Proxmox cluster of 12 nodes on PVE 6.4 (latest patches) with local ZFS storage.
In the future, the PVE cluster will be fully backed by Ceph storage.
To migrate the infrastructure to the new PVE version 7.3, we want to use our external Ceph storage to offload the VM data (a separate PVE 7.3 cluster with Ceph Quincy).
Since the RBD integration of Ceph Quincy does not work with the old "luminous" client on the current Proxmox 6.4 nodes, we want to move the data through a 7.3 node as a migration path.
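To confirm the client-side version gap before planning around it, the installed Ceph client can be checked on each node. A minimal sketch (package and command names are the standard ones shipped with PVE; the exact versions reported will depend on your repositories):

```shell
# On an old PVE 6.4 node: show the installed Ceph client version
ceph --version

# Cross-check against the Debian package providing the client libraries
dpkg -l ceph-common

# On the PVE 7.3 node (after upgrade): should report a Quincy-compatible client
ceph --version
```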
So, alongside the twelve 6.4 nodes, the plan would be:
1. Add a thirteenth node to the cluster
2. Upgrade this node from 6.4 to 7.3
3. Mount the external Ceph storage on node 13 only
4. Migrate all VMs one by one from the twelve cluster nodes (with their local storage) to node 13
5. Move the VM disks from the local storage to the shared Ceph storage
6. Upgrade all old nodes from 6.4 to 7.3
7. Make the Ceph storage available to all other nodes
8. Distribute the VM configurations back to their respective original nodes
9. Start the VMs from the shared Ceph storage on the respective upgraded nodes
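Steps 3–5 and 7 could be sketched with the standard PVE CLI tools. The storage name `ceph-ext`, the monitor address, pool, VMID, disk slot, and node names below are placeholders for illustration, not values from our setup:

```shell
# Step 3: add the external RBD storage, restricted to node 13 only
pvesm add rbd ceph-ext \
    --monhost 10.0.0.1 \
    --pool vm-pool \
    --username admin \
    --nodes pve13

# Step 4: migrate a VM (here VMID 100) from an old node to node 13,
# carrying its local ZFS disks along (run on the source node)
qm migrate 100 pve13 --online --with-local-disks

# Step 5: on node 13, move the VM's disk onto the shared Ceph storage
# and delete the local copy afterwards
qm move_disk 100 scsi0 ceph-ext --delete 1

# Step 7 (once all nodes run 7.3): lift the node restriction so every
# node can access the Ceph storage
pvesm set ceph-ext --delete nodes
```

This is only a sketch of the per-VM workflow; whether the online `--with-local-disks` migration behaves well between mixed 6.4/7.3 nodes is exactly the kind of thing we would like confirmed.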
Is this a reasonable, feasible approach?