Hi everyone,
I read all of the relevant documentation and quite a few forum posts regarding upgrading from 6.x to 7.x. I know the recommended approach is generally to back up the VM, copy it to the new host/cluster, and restore, but I think I've settled on a faster/better migration process (for my specific application/need) and wanted to get a sanity check from everyone on the Proxmox Forum.
The current production 6.x cluster has shared storage on a separate ZFS device (FreeNAS). VMs are stored on the ZFS device, and Proxmox communicates with it over iSCSI.
I installed Proxmox 7.4 on two new/spare servers, created a new cluster on those servers, attached those to the same ZFS device via iSCSI.
Since both clusters see the same VM disks on the same storage, I tested a migration: I snapshotted an offline VM (just in case) and copied its config file to a server on the new cluster (into /etc/pve/qemu-server). The VM appeared and booted properly on the new cluster.
Everything seems awesome with this, and the outage/downtime is very quick for the VM. Shut it down, copy the conf file, boot it up.
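The three steps above can be sketched as a dry-run script that just prints the commands to run (the VMID, hostname, and node name here are placeholders, not from my actual setup; note that /etc/pve is the clustered config filesystem, so the copy has to target a node in the new cluster):

```shell
# Dry-run sketch of the config-copy migration. Hypothetical VMID and hosts;
# run the echoed commands manually on the appropriate cluster nodes.
VMID=100
NEW_NODE=pve7-node1   # placeholder: a node in the new 7.4 cluster

# Step 1: cleanly shut the VM down on the old cluster.
echo "qm shutdown $VMID"

# Step 2: copy the VM config from the old cluster to the new one.
echo "scp /etc/pve/qemu-server/$VMID.conf root@$NEW_NODE:/etc/pve/qemu-server/"

# Step 3: start the VM on the new cluster (disks are already visible via iSCSI).
echo "qm start $VMID"
```

The downtime window is only steps 1–3, since the disks themselves never move.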
Does anyone see a potential issue with this plan? I have manually replicated the entire configuration between the clusters to ensure things are the same, including firewall, etc. I wanted to ask here just as confirmation that I'm not missing anything.
Side note: I then rolled back the migration test by deleting the config file on the new cluster, reverting the disk to the storage snapshot, and deleting the snapshot.
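For the record, the rollback can be sketched the same way (dataset and snapshot names are placeholders; the zfs commands run on the FreeNAS box, not on a Proxmox node):

```shell
# Dry-run sketch of the rollback. Hypothetical dataset/snapshot/VMID names.
VMID=100
DATASET=tank/vm-disks/vm-100-disk-0   # placeholder ZFS dataset on FreeNAS
SNAP=pre-migration-test                # placeholder snapshot name

# On the new cluster node: remove the copied config so the VM disappears there.
echo "rm /etc/pve/qemu-server/$VMID.conf"

# On the FreeNAS/ZFS host: revert the disk, then discard the safety snapshot.
echo "zfs rollback $DATASET@$SNAP"
echo "zfs destroy $DATASET@$SNAP"
```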
Thoughts? Thanks!
-Todd