Need some help coming up with a solution to this problem:
Summary:
Our team recently moved physical locations, and due to the way our ISP was set up we could not migrate our public address block to the new location. I'm looking for ideas on migrating our existing VMs from the current cluster (A) to the new cluster (B) with minimal downtime.
Misc Specifics:
· We have production web hosting running on cluster A that cannot be down for more than a few hours at a time.
· There is not enough space in the existing non-shared storage to run full VM image backups.
· None of the physical machines in A or B support USB 3.0.
· Some of the VMs run eCommerce platforms with transactional data, so they must be stopped during transfer to prevent losing order or transaction data.
Cluster A:
· 3-node cluster.
· ~1TB of storage used across all three nodes, mounted directly to the physical machines (not using shared storage features).
· Node 1 – 289GB raw capacity, 197GB used
· Node 2 – 751GB raw capacity, 493GB used
· Node 3 – 309GB raw capacity, 193GB used
· Largest VM disk is 450GB
· Other VM disks range from 120GB-225GB
Cluster B:
· 4-node cluster.
· ~1TB of storage available to the nodes as an NFS export (not making the same mistake twice of mounting the storage directly to the nodes).
· ~250GB raw capacity on each node, if needed.
Network A:
· Shared gigabit connection averaging ~900Mbps
Network B:
· Dedicated gigabit connection averaging ~950Mbps (rough transfer-time math below)
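For context on the downtime budget, here's my back-of-the-envelope math for pushing everything across the link (assuming the full ~1TB of used storage moves over the wire at the measured rate):

```python
# Rough best-case transfer time over the gigabit link.
used_tb = 1.0                  # ~1TB used across cluster A
link_mbps = 900                # measured throughput in megabits/s
seconds = used_tb * 8e6 / link_mbps    # 1TB = 8e6 megabits
print(f"~{seconds / 3600:.1f} hours")  # ~2.5h at line rate
```

So even at line rate it's roughly 2.5 hours before any protocol or disk overhead, which eats most of the "few hours at a time" window.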
What I’ve Tried:
· Making the NFS export temporarily public-facing, restricted to allow access only from cluster A's address block – it failed miserably during a test backup, causing one of the nodes to lock up, and I had to physically force a restart. If I retry this, I'd harden the mount first (sketch below).
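If I do retry the NFS-over-WAN approach, the plan would be to mount with soft options so a stalled link returns an I/O error to the backup job instead of wedging the node (which is what I suspect happened). A minimal sketch; the server address and paths are placeholders, not our real values:

```python
import subprocess

# Placeholders -- substitute the real NFS host and paths.
NFS_SERVER = "203.0.113.10"      # cluster B's public-facing NFS host
EXPORT = "/export/pve"           # export path on that host
MOUNTPOINT = "/mnt/pve-migrate"  # mount point on the cluster A node

# 'soft' plus a short timeout makes a dead server fail the I/O after a
# few retries (timeo is in tenths of a second) instead of blocking the
# node forever the way a default hard mount does.
subprocess.run(
    ["mount", "-t", "nfs",
     "-o", "soft,timeo=150,retrans=3,vers=4",
     f"{NFS_SERVER}:{EXPORT}", MOUNTPOINT],
    check=True,
)
```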
What I Can’t Do:
· Run a backup of any of the largest VM disks onto the physical nodes, because the image would exceed the available local storage (e.g. Node 2 has only 751GB - 493GB = 258GB free, well short of the 450GB disk).
· Add disks to the nodes (each node has 2 drive bays and both are full).
What I Have Available:
· 4TB backup USB drive
· Lots of spare hardware
What I’ve Thought Of:
· Using a small, cheaper desktop with USB 3.0 as an NFS export for cluster A, moving the VMs onto that shared storage, then adding the same NFS export to cluster B and reversing the process onto cluster B's shared storage (NFS export) – first sketch below.
· Mounting the USB backup drive on each node in turn, backing up that node's VMs, then moving them over one node at a time to reduce downtime – second sketch below.
· Moving the physical machines from location A to location B, replicating the network (sans the public-facing address block), then trying to change the IP addresses over to the new Corosync network – third sketch below.
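For the first idea, this is roughly the sequence I'm picturing on the cluster A side, assuming the pvesm and qm move_disk syntax from our PVE version; the IP, export path, storage name, and VM/disk IDs below are placeholders, not our real values:

```python
import subprocess

def run(cmd):
    """Echo and execute a command, aborting the script on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Placeholders for illustration only -- substitute real values.
DESKTOP_IP = "192.168.10.50"   # the USB 3.0 desktop acting as NFS server
EXPORT = "/srv/migrate"        # the directory it exports
STORAGE_ID = "migrate-nfs"     # name for the storage inside Proxmox

# Register the desktop's export as shared storage on cluster A.
run(["pvesm", "add", "nfs", STORAGE_ID,
     "--server", DESKTOP_IP,
     "--export", EXPORT,
     "--content", "images"])

# Move each VM disk onto the export (example VM/disk IDs); with the VM
# stopped this is a plain copy, and --delete 1 frees the local storage.
for vmid, disk in [(101, "scsi0"), (102, "virtio0")]:
    run(["qm", "move_disk", str(vmid), disk, STORAGE_ID, "--delete", "1"])
```

Cluster B would then get the same pvesm add pointed at the desktop, followed by move_disk onto B's own NFS storage; the VM .conf files would still have to be recreated or copied over separately.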
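For the second idea, a minimal sketch of one node's round trip, assuming vzdump/qmrestore with the USB drive mounted at /mnt/usb (paths, VMIDs, the storage name, and the archive filename are all made up for illustration). Since none of the nodes have USB 3.0, the copy runs at USB 2.0 speed, which is the part that worries me for the downtime window:

```python
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

USB_MOUNT = "/mnt/usb"  # where the 4TB USB drive is mounted (placeholder)

# On a cluster A node: stop-mode backups so the eCommerce VMs are fully
# quiesced and no orders are lost mid-copy. zstd assumes a PVE version
# that supports it; gzip/lzo otherwise.
for vmid in (101, 102):  # example VM IDs
    run(["vzdump", str(vmid),
         "--dumpdir", USB_MOUNT,
         "--mode", "stop",
         "--compress", "zstd"])

# Later, on a cluster B node with the drive re-attached: restore onto
# B's NFS-backed storage under a new VMID.
run(["qmrestore",
     f"{USB_MOUNT}/vzdump-qemu-101-2024_01_01-00_00_00.vma.zst",
     "201",                          # new VMID on cluster B
     "--storage", "cluster-b-nfs"])  # placeholder storage name
```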
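For the third idea, the piece I'm least sure about is re-addressing the cluster: as I understand it, that means updating each node's interface config and /etc/hosts, then editing /etc/pve/corosync.conf so every ring0_addr points at the new subnet, with config_version bumped so corosync picks the change up. A rough sketch of just that rewrite, with placeholder subnets (I'd test this on throwaway hardware first):

```python
import re

# Placeholder subnets -- substitute the real old/new Corosync networks.
OLD_PREFIX = "10.0.0."
NEW_PREFIX = "10.1.0."
CONF = "/etc/pve/corosync.conf"

with open(CONF) as f:
    text = f.read()

# Re-point every ring0_addr at the new subnet, keeping the host octet.
text = text.replace(f"ring0_addr: {OLD_PREFIX}",
                    f"ring0_addr: {NEW_PREFIX}")

# corosync only reloads the file when config_version increases.
text = re.sub(r"config_version:\s*(\d+)",
              lambda m: f"config_version: {int(m.group(1)) + 1}",
              text)

with open(CONF, "w") as f:
    f.write(text)
```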