Hello,
I'm trying to think of the best way to upgrade my now-outdated v5.4 cluster to v7. At the moment I'm thinking of simply reinstalling the nodes with v7 and migrating by using the VM conf files and the shared Ceph RBD pool that is used as storage (separate hardware, also still running the outdated v5, but that shouldn't be a problem for now?).
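For context, I'd expect to add the shared pool on the new v7 cluster as an external RBD storage, roughly like this (the storage ID ceph-rbd, the pool name vm-pool and the monitor IPs are just placeholders for my actual setup, and the external cluster's keyring would need to be copied to /etc/pve/priv/ceph/ceph-rbd.keyring first):

```bash
# Placeholder values: "ceph-rbd" storage ID, "vm-pool" pool, example monitor IPs.
# Keyring for the external cluster goes to /etc/pve/priv/ceph/ceph-rbd.keyring.
pvesm add rbd ceph-rbd \
    --pool vm-pool \
    --monhost "10.0.0.1;10.0.0.2;10.0.0.3" \
    --content images \
    --username admin
```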
Question is, will the shared storage be okay between two clusters as I slowly move the VMs/nodes to the new cluster? I know that some locking is required on shared storage and single-cluster usage is recommended. But would it be okay if I do the following (rough command sketch after the list)?
* Add the Ceph pool as storage in both clusters
* Shut down the VM on the old cluster
* Add the VM to the new cluster by copying its conf file over
* Start the VM on the new cluster
* Remove the VM conf file from the old cluster
* Rinse and repeat until all VMs/nodes have been moved to the new cluster
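Per VM, the steps would look roughly like this (VMID 100 and the node name old-node are just examples; the storage ID referenced by the disks in the conf has to exist under the same name on the new cluster, and the VMID must not already be taken there):

```bash
# On the old cluster: cleanly shut the guest down so nothing writes to its RBD images.
qm shutdown 100

# On a node of the new cluster: pull the conf over.
# /etc/pve/qemu-server/ is a symlink to the local node's directory in the cluster filesystem.
scp root@old-node:/etc/pve/qemu-server/100.conf /etc/pve/qemu-server/100.conf

# Start the guest on the new cluster; its disks are reached via the shared RBD storage.
qm start 100

# Back on the old cluster: remove the now-stale conf so the VM can't be started there again.
# (Not "qm destroy" - that would also delete the disks on the shared pool.)
rm /etc/pve/nodes/old-node/qemu-server/100.conf
```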
EDIT:
Okay, that was a dumb question. I was just overly cautious that something would break that I didn't know about. Tested in a staging env and this approach works fine...