Physically move ZFS pool to different cluster member

Gecko

I need some advice on how to complete this task.

I have a two-server cluster: Server A and Server B.

Server A is getting a little old and requires some hardware/firmware maintenance, so I need to take it offline for an unknown amount of time. Server A currently has an old ZFS pool of nine 8TB disks that holds all of my VMs. I want to migrate all the VMs and physically move the pool disks from Server A to Server B.

I could do the following (rough commands sketched after the list):
  1. Load Server B with twenty-four 1TB disks, and create a new ZFS pool
  2. Register/publish the new ZFS pool with Proxmox, Server B
  3. Migrate all the VMs from Server A to Server B, using the new ZFS pool on Server B
  4. Unregister the old ZFS pool from Proxmox on Server A
  5. Tear down the old ZFS pool on Server A
  6. Turn off Server A and Server B
  7. Move the disks from Server A into Server B
  8. Turn on Server A and Server B
  9. Recreate the nine 8TB disk ZFS pool in Server B
  10. Migrate all the VM disks/files to the nine 8TB disk ZFS pool in Server B
  11. Remove the twenty-four 1TB ZFS pool from Server B
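For reference, I imagine steps 1-3 would look roughly like the following (pool name, storage name, and VM ID are placeholders I made up):

  # On Server B: build the temporary pool from the twenty-four 1TB disks
  # (raidz2 is only an example layout)
  zpool create tank-tmp raidz2 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2  # ...and so on

  # Register it as a Proxmox storage, restricted to Server B
  pvesm add zfspool tank-tmp --pool tank-tmp --nodes serverB --content images,rootdir

  # Migrate a VM from Server A, moving its local disks along with it
  qm migrate 100 serverB --online --with-local-disks --targetstorage tank-tmp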
Or, I could try something like this (again with a command sketch after the list):
  1. Turn off all VMs on Server A
  2. Export the ZFS pool
  3. Turn off Server A and Server B
  4. Move the disks from Server A to Server B
  5. Turn on Server B
  6. Import the ZFS pool
  7. Turn on Server A
  8. ...and...?
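In command terms, I think the second option starts out something like this (assuming the pool is named tank):

  # On Server A, with all VMs stopped
  zpool export tank

  # After physically moving the disks, on Server B
  zpool import tank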
This is where I need your guidance. What would be the correct set of steps to move the VMs, pool, and disks to Server B?
 
The first approach means less downtime, but a lot of copying and waiting.

The second approach is the fastest. It works as-is if the two pools have different names; if not, you also need to import the pool from Server A on Server B under a different name, add it to PVE as a storage, and change all the VMs to reference that new pool name instead of rpool.
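A rough sketch of that rename-on-import path (the name rpool-a is just an example):

  # On Server B: import A's pool under a new name to avoid the clash with B's rpool.
  # If two importable pools share a name, use the numeric pool ID shown by a plain 'zpool import' instead.
  zpool import rpool rpool-a

  # Register the renamed pool as a new PVE storage
  pvesm add zfspool rpool-a --pool rpool-a --content images,rootdir

  # Then edit each VM config (/etc/pve/qemu-server/<vmid>.conf) so the disk
  # lines reference the new storage name instead of the old one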
 
Hi,

You could do the second variant, but you must take into account that the ZFS pool name on A might clash with the one on B. If your pool name is the same on A and B, you can import the pool from A on B using a different name. Also, as a safety belt, you can create a pool checkpoint on A before the export; if you damage the pool, you can rewind A's pool to it. Before powering off A, I would also run a scrub and only then create the checkpoint (with all VMs/CTs stopped and the network down), and then export the pool.
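Assuming the pool is named tank, that safety-belt sequence would be roughly:

  # On Server A, with all VMs/CTs stopped
  zpool scrub tank        # verify the pool is healthy
  zpool status tank       # wait for the scrub to finish cleanly
  zpool checkpoint tank   # take a pool-wide checkpoint
  zpool export tank

  # If something goes wrong later, rewind to the checkpoint at import time:
  zpool import --rewind-to-checkpoint tank

  # Once everything checks out on B, discard the checkpoint:
  zpool checkpoint --discard tank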

After you see that the A pool is OK on B, you need to move your VM/CT config files from node A to node B and modify the pool name in them if needed.
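On a PVE cluster the guest configs live in the shared pmxcfs, so moving them is just a rename between the node directories (nodeA/nodeB are placeholders for your actual node names):

  # Run on any cluster node
  mv /etc/pve/nodes/nodeA/qemu-server/*.conf /etc/pve/nodes/nodeB/qemu-server/
  mv /etc/pve/nodes/nodeA/lxc/*.conf /etc/pve/nodes/nodeB/lxc/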

Another idea, maybe a stupid one: in your case I would create two test VMs, one on each node, and install ZFS on both (it will be slow, but it lets you test your migration scenario), using some small files as the HDDs in the same ZFS layout. Then put one dataset on the fake A server (as a fake VM/CT). Then use these two fake VMs to check that your migration plan works (if you can see your fake A pool on B, it is OK). Document every step on paper, then repeat the steps from the paper again.
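For that test rig, file-backed vdevs are enough; inside the fake Server A VM, something like this (sizes and paths are arbitrary):

  # Create sparse files to stand in for the nine 8TB disks
  for i in $(seq 1 9); do truncate -s 1G /var/tmp/disk$i.img; done

  # Build a pool with the same layout as the real one (raidz2 as an example)
  zpool create faketank raidz2 /var/tmp/disk{1..9}.img

  # One dataset standing in for a fake VM/CT disk
  zfs create faketank/vm-100-disk-0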

Then kindly share your results/ideas, and only then go and do all these steps with your real data/servers.

Good luck! I will cross my fingers for you ;)
 
