How do I clone a ZFS dataset to another pool and then re-home Proxmox there?

stevefan1999

Member
Mar 1, 2020
So I have a pretty ridiculous scenario: I've built a miniature custo-xpenology box (just like a CustoMac in the Hackintosh scene), and I also have a "home server" that I originally used as a NAS node with Proxmox and ZFS installed on it. I then started shifting the focus to running containers on the Proxmox node, and now I even run Kubernetes on it.

Now, that "home server" is racked up with a RAID5 setup consisting of 4 consumer-grade hard drives, 3TB each, 12TB in total, which leaves 9TB of usable storage after parity. Luckily the pool is only about 1/3 full, but unfortunately one of the drives has faulted. I also added a 256GB consumer-grade SSD as a SLOG and L2ARC backend.

I was quite disappointed with the performance of RAID5. It was acceptable in the beginning, when the box was purely a data warehouse, but now that I want to run a small business on it, it is clearly not OK.

I've now decided to grow some balls and buy 4 second-hand 4TB SAS2 drives (my motherboard, an X9DRL-3F, has 8 SAS2 ports; I'm still confused by the pinout at this point, why do they seem to be SATA-compatible though?), and I'd wire them up in RAID10, with 8TB available initially, which is well within the size of my old RAID5 rpool. It's risky to use second-hand, off-the-market components, but it's far cheaper than buying retail drives: the cheapest new 4TB drive on the market, a Toshiba, is about $73, while a second-hand HGST datacenter 4TB salvage pull is just $50.

Here's the problem: how do I do that heart-transplant surgery? I know I can make those SAS drives into a second pool, rpool2, and simply send all my snapshots, which I take every single day, to rpool2. I know I will then have to shut the server down, boot a Proxmox live CD, send the rest of rpool to rpool2 incrementally, and then I finally have an exact clone of rpool.
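Roughly, I imagine something like this (just a sketch using the pool names above, untested):
Code:
# take a fresh recursive snapshot of the whole root pool
zfs snapshot -r rpool@migrate1
# replicate all datasets, snapshots and properties to the new pool
zfs send -R rpool@migrate1 | zfs receive -Fdu rpool2
# later, from the live CD, send only what changed since then
zfs snapshot -r rpool@migrate2
zfs send -R -i rpool@migrate1 rpool@migrate2 | zfs receive -Fdu rpool2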

Problem solved? No. I don't know how to clone the GRUB setup (the one that boots ZFS), nor how to handle the ZFS import, nor how to fix the mountpoints (from my observation, they clearly cannot be set recursively, which is another level of pain given I have 1636 of them right now).

My ultimate goal is to boot from the RAID10, like yesterday, as if nothing happened. I know this is not the kind of place to beg for help, but please, is there any advice for this kind of full-transplant surgery? I regret not having Proxmox installed on a separate, independent drive, which would have relieved this problem (currently driving me nuts) substantially.
 
If you regret not having used a dedicated boot device, why not implement one now?
An 8GB industrial-grade SSD works just fine (for me), and it also gives you a quick recovery procedure, since you can image it down to an image file.
I purchased multiple 8GB SSDs on eBay for 6€ each to implement exactly that.
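Imaging such a small boot device is straightforward; a minimal sketch (the device name /dev/sdX and the target path are placeholders, and you should do this from a rescue/live system, not the running host):
Code:
# back the boot SSD up into an image file
dd if=/dev/sdX of=/mnt/backup/boot-ssd.img bs=1M status=progress
# restore it later onto a replacement device of at least the same size
dd if=/mnt/backup/boot-ssd.img of=/dev/sdX bs=1M status=progress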

Everything else should be fairly easy in my opinion.
As you said:
- Move your data from zfs-pool to zfs-pool via zfs send / zfs receive
- do not forget to clear the snapshots on your receiving end. If you have a lot of ZFS datasets you might want to use a script (I borrowed this from somewhere ...):
Code:
#!/bin/bash
# Destroy every snapshot on the system.
# Narrow the 'zfs list' call (e.g. add '-r poolname') to target only one pool.
for snapshot in $(sudo zfs list -H -t snapshot -o name); do
    sudo zfs destroy "$snapshot"
done
- export your original pool
- export your new pool
- import your new pool under the name of the old pool - when all datasets have kept their names, your CT and VM configs should just work out of the box (see the sketch after this list)
- remove the old pool's disks from your system, otherwise you will run into trouble on automatic import, because both pools share the same name. You can circumvent this by first importing the original pool under an alternate name. Not doing so caused me a lot of trouble afterwards, when I re-attached the disks to "get something over I forgot".
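A minimal sketch of the export/import/rename part (pool names rpool and rpool2 as used above; run it from a live CD or rescue shell and double-check the pool names first):
Code:
# export both pools cleanly
zpool export rpool
zpool export rpool2
# re-import the new pool under the old pool's name
zpool import rpool2 rpool
# then physically remove (or at least detach) the old rpool disks before rebooting,
# otherwise two pools carrying the same name will collide on automatic import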

Side-Note: IMHO you should not (and I mean really not) use old HDDs for production-grade work unless you know how they have been treated. You may pay a high price for doing so (in terms of the trouble that arises). New gear can have its challenges as well, but at least it is under warranty.

HTH
 