So I have a pretty ridiculous scenario: I've built a miniature custom Xpenology box (much like a CustoMac in the Hackintosh scene), and I also have a "home server" that I originally used as a NAS node, running Proxmox with ZFS. I then shifted the focus to running containers on the Proxmox node, and now I even run Kubernetes on it.
Now, that "home server" is racked up with a RAID5 (raidz1) setup of 4 consumer-grade 3TB hard drives, 12TB raw, which after parity left me 9TB of usable storage. Luckily it was only 1/3 full, but unfortunately one of the drives has faulted. I also added a 256GB consumer-grade SSD serving as both SLOG and L2ARC.
I was very disappointed with the performance of RAID5. It was acceptable in the very beginning, when the box was solely a data warehouse, but now that I want to run a small business on it, it's clearly not OK.
I've now decided to really grow some balls and buy 4 second-hand 4TB SAS2 drives (my motherboard, an X9DRL-3F, has 8 SAS2 ports, and I'm still confused by the pinout at this point: why do they seem SATA-compatible?). I'd wire them up in RAID10, with 8TB available initially, which is well within the size of my old RAID5 rpool. It's dangerous to use second-hand, off-market components, but it's far cheaper than buying retail drives: the cheapest new 4TB drive on the market, a Toshiba, is about $73, while a second-hand HGST datacenter 4TB salvage pull is just $50.
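For what it's worth, wiring four drives into "RAID10" under ZFS means a pool of two striped mirror vdevs. A minimal sketch, assuming the four SAS drives show up under /dev/disk/by-id/ (the device names below are hypothetical placeholders, not real IDs):

```shell
# Create a striped-mirror ("RAID10") pool from the four second-hand SAS drives.
# ashift=12 assumes 4K-sector disks; verify with the drives' datasheets.
# Device names are placeholders -- substitute your actual by-id paths.
zpool create -o ashift=12 rpool2 \
  mirror /dev/disk/by-id/scsi-DRIVE-A /dev/disk/by-id/scsi-DRIVE-B \
  mirror /dev/disk/by-id/scsi-DRIVE-C /dev/disk/by-id/scsi-DRIVE-D

# Sanity-check the vdev layout before sending any data.
zpool status rpool2
```

Using /dev/disk/by-id/ paths rather than /dev/sdX keeps the pool stable across reboots and controller re-enumeration.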
Here's the problem: how do I perform that heart-transplant surgery? I know I can make those SAS drives rpool2 and simply send all my snapshots, which I take every single day, over to it. I know I will then have to shut down my "server", boot a Proxmox live CD, send the rest of rpool to rpool2 incrementally, and then I'll finally have an exact clone of rpool.
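The two-phase replication I have in mind would look roughly like this (a sketch, assuming my own snapshot naming; the @migrate-* names are placeholders I'd pick for the occasion):

```shell
# Phase 1 -- system still running: bulk-copy everything up to a fresh
# recursive snapshot. -R replicates all child datasets and their properties.
zfs snapshot -r rpool@migrate-base
zfs send -R rpool@migrate-base | zfs receive -Fdu rpool2
# -F rolls rpool2 back if needed, -d preserves dataset names,
# -u keeps received datasets unmounted so they don't shadow the live system.

# Phase 2 -- from the live CD, with nothing writing to rpool:
# send only the delta since the base snapshot, then the pools match exactly.
zfs snapshot -r rpool@migrate-final
zfs send -R -I rpool@migrate-base rpool@migrate-final | zfs receive -Fdu rpool2
```

The `-u` flag on receive matters in phase 1: without it, the copied datasets would try to mount over the paths the running system is still using.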
Problem solved? No. I don't know how to clone the GRUB BIOS setup (the part that boots ZFS), how to handle the ZFS import, or how to replace the mount points (from my observation, the mountpoint property clearly cannot be set recursively, which is another level of pain given I have 1636 datasets right now).
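On the mountpoint question specifically, my understanding is that it may be less painful than it looks: `zfs send -R` carries properties across, and any dataset whose mountpoint is *inherited* (not explicitly set) follows its parent automatically, so changing the parent once cascades down. A sketch, with dataset names that are guesses at a typical Proxmox layout rather than my actual tree:

```shell
# List only the datasets that have an EXPLICIT (local) mountpoint override.
# Everything not listed here inherits, and will follow its parent for free.
zfs get -r -s local mountpoint rpool2

# Setting the mountpoint on the root dataset propagates to all inheriting
# children -- no loop over 1636 datasets needed.
# "rpool2/ROOT/pve-1" is the usual Proxmox root dataset name; adjust to yours.
zfs set mountpoint=/ rpool2/ROOT/pve-1
```

Only the datasets reported by the first command would need individual attention.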
My ultimate goal is to boot from the RAID10 like nothing happened, preferably by yesterday. I know this is not the kind of place to beg for help, but please, is there any advice for this kind of full-transplant surgery? I regret not having installed Proxmox on a separate, independent drive, which would have relieved this problem (currently driving me nuts) substantially.
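For the boot part, my rough plan (very much a sketch, and assuming a legacy-BIOS install where GRUB itself reads ZFS; newer Proxmox installs use proxmox-boot-tool with separate ESP partitions instead, and the new disks would also need whatever BIOS-boot/ESP partition layout the originals have):

```shell
# From the live CD, with the old pool physically detached or imported
# under another name to avoid a name clash:

# 1. Rename the new pool to "rpool" so the existing boot config and
#    /etc/fstab-equivalents need no editing.
zpool import rpool2 rpool

# 2. Import at a temporary root, chroot in, and reinstall GRUB on
#    EVERY member of the new mirrors so any disk can boot the box.
zpool export rpool
zpool import -R /mnt rpool
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /bin/bash
grub-install /dev/disk/by-id/scsi-DRIVE-A   # repeat for B, C, D (placeholder names)
update-grub
exit
```

If someone can confirm or correct this sequence, especially the pool-rename step, I'd be grateful.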