[Server migration] How should I approach this?

Keeper of the Keys

Active Member
Jul 7, 2021
(Sorry about the vague title; I wasn't sure what to use. Even writing this post is brainstorming for me.)

I have a single Proxmox host in my homelab. It has 10 SATA SSDs, split as follows:
- Proxmox OS + the majority of the guest OS disks sit on a 2-device ZFS mirror
- Some guest VMs have data living on an 8-device raidz2

I bought a motherboard to upgrade this server; it can connect 2 NVMe and 9 SATA drives, so my plan is that the new machine will have its OS on a ZFS mirror on the NVMe disks.

At the moment my thought process is:
1. Set up a new Proxmox server on the new motherboard + 2 NVMe drives and cluster it with the old machine
2. Migrate all the guests from old to new (they can be off)
3. Remove old from cluster and shut it down
4. Connect raidz2 array to new
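For what it's worth, steps 1-3 of that plan map onto stock PVE commands; everything here (cluster name, node names, IP, VMID) is a made-up example:

```shell
# On the OLD node: create the cluster (skip if one already exists)
pvecm create homelab

# On the NEW node: join the cluster, pointing at the old node's IP
pvecm add 192.168.1.10

# On the OLD node: migrate a stopped guest to the new node
# (repeat per VMID; "pve-new" is the assumed name of the new node)
qm migrate 100 pve-new

# After everything is moved and the old node is powered off for good,
# remove it from the cluster (run on the remaining node)
pvecm delnode pve-old
```

Containers would use `pct migrate` instead of `qm migrate`.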

Ideally I would like not to migrate the raidz2 data during the process, since that would just add write cycles that nobody needs, but I'm not sure that is possible.
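Avoiding a copy should in fact be possible: a ZFS pool moves between machines via export/import, with the data staying in place on the disks. A minimal sketch, assuming the data pool is named `tank`:

```shell
# On the old machine, before shutdown: cleanly export the data pool
zpool export tank

# On the new machine, after cabling up the 8 disks: scan for pools
zpool import

# Import by the name the scan shows, using stable by-id device paths
zpool import -d /dev/disk/by-id tank
```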

Another option I am toying with (though I am less charmed by it) would be:
1. Cleanly shut down old
2. move raidz2 to new
3. connect one of the zfs-mirror devices to the 9th sata port
4. Boot from some live medium that supports ZFS, set up the NVMe ZFS mirror and rsync the data from the old disk (I don't want to just add the NVMe devices to the existing pool, since they are 2T where the old SATA drives were 1T)
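Steps 3-4 of that option could look roughly like this from the live environment. All names here (device paths, the temporary pool name) are assumptions, and `zfs send -R` is shown in place of rsync because it preserves the dataset hierarchy, snapshots and properties of a root pool:

```shell
# Import the old root pool under a different name and an altroot so it
# doesn't collide with the new pool about to be created
zpool import -R /mnt/old rpool oldpool

# Create the new mirror on the NVMe disks (by-id paths are placeholders)
zpool create -o ashift=12 rpool mirror \
    /dev/disk/by-id/nvme-DISK1-part3 /dev/disk/by-id/nvme-DISK2-part3

# Replicate everything from the old pool into the new one
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F rpool
```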

Any wise words would be appreciated :)
 
Just putting all the old drives on the new motherboard is not possible since it "only" has 9 SATA ports and I am using 10; also, as said, I saw this as a nice opportunity to upgrade the ZFS mirror used for Proxmox and primary storage to NVMe.

I could run with a degraded OS mirror and migrate it to the NVMe disks, but I don't think that can be done by directly leveraging the ZFS mirror since the NVMe disks are 2T (unless I split the NVMe disks into two partitions, I guess).
 
At the moment I'm still trying to fix boot issues. What I ended up doing so far:

1. Connect the old mirror to the SATA ports of the new motherboard
2. Boot an Ubuntu 25.04 live image (just what I happened to have an ISO of)
3. Create a GPT partition table and 3 partitions on each NVMe disk (1M - bios_boot, 1G - EFI, the rest)
4. Add the NVMe devices as a new mirror to rpool
5. Remove the SATA mirror from the pool (which causes the data to be migrated)
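Sketched out with hypothetical device names (the partition type codes match the stock PVE install layout), steps 3-5 were roughly:

```shell
# Partition each NVMe disk like a PVE install: BIOS boot, ESP, ZFS
sgdisk -n1:0:+1M -t1:EF02 -n2:0:+1G -t2:EF00 -n3:0:0 -t3:BF01 /dev/nvme0n1
sgdisk -n1:0:+1M -t1:EF02 -n2:0:+1G -t2:EF00 -n3:0:0 -t3:BF01 /dev/nvme1n1

# Attach the NVMe ZFS partitions to the existing mirror (4-way for now)
zpool attach rpool /dev/disk/by-id/ata-OLDSSD1-part3 /dev/nvme0n1p3
zpool attach rpool /dev/disk/by-id/ata-OLDSSD2-part3 /dev/nvme1n1p3

# Wait for the resilver to finish, then drop the SATA side of the mirror
zpool status rpool
zpool detach rpool /dev/disk/by-id/ata-OLDSSD1-part3
zpool detach rpool /dev/disk/by-id/ata-OLDSSD2-part3
```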

Initially I forgot to copy the contents of the EFI partition, which I did after I found I could not boot.
I chrooted into the PVE rpool and updated the partition UUIDs used for EFI boot.

The Proxmox rescue boot is incapable of finding the rpool device; I'm not entirely sure where to go from here.
 
Please note that I have already tried the steps from https://forum.proxmox.com/threads/proxmox-rescue-disk-trouble.127585/#post-557888

I have also chrooted into the resulting mount of rpool and run `proxmox-boot-tool status` and `proxmox-boot-tool refresh`; the output, as I understand it, seems to suggest all is fine.

`update-initramfs`, on the other hand, complains about a missing loader.conf on the EFI partitions; I verified with the old SATA disks and they also don't have that file.

I created the partition table with GParted and added the flags afterwards.

The TUI debug installer actually claims it can't find compatible disks.

I feel like I missed some step in flagging the disks for UEFI, but it's unclear to me which.
 
In the end I got it working from the chroot (which had /sys and /dev bind mounted) by reformatting, reinitializing and updating the boot partition(s).
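For anyone landing here later, the sequence I mean is roughly this (device names are examples; run inside the chroot with /dev, /proc and /sys bind-mounted):

```shell
# Reformat the first ESP and register it with proxmox-boot-tool
proxmox-boot-tool format /dev/nvme0n1p2 --force
proxmox-boot-tool init /dev/nvme0n1p2

# Same for the second NVMe disk
proxmox-boot-tool format /dev/nvme1n1p2 --force
proxmox-boot-tool init /dev/nvme1n1p2

# Copy kernels/initrds onto all registered ESPs and verify
proxmox-boot-tool refresh
proxmox-boot-tool status
```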

I actually had an error with one partition, so I need to double-check that *both* SSDs actually have working boot partitions, but this is already much better than where I was :)