[SOLVED] PVE 9 Single Server => New Hardware. Is there a hardware migration guide?

hive

Member
Mar 6, 2021
I need to migrate a single PVE 9 server to new hardware. I have a mirrored NVMe ZFS boot pool, plus 2 x U.2 data drives and spinning-rust drives for VMs and data.
What would be the simplest way to migrate this system to 2 x new NVMe drives (same size, but new and faster, with higher TBW specs)?
I tried just using dd to clone the NVMe drives to the new ones and putting them in the new server, but the system did not come up. It failed on importing the rpool, and trying to force an import (zpool import -f rpool) results in "cannot import 'rpool': I/O error. Destroy and re-create the pool from a backup source."
I have PBS running, so I could just restore the VMs, but there is the PVE datacenter config I was hoping to migrate vs. recreate on the new server.

I've hunted around this forum and elsewhere on the interwebs and have not found a set of steps that work for me (maybe operator error?).

Does anyone have a set of steps they can point me to that have worked to migrate to new hardware, and have maintained as much of the PVE config as possible?
 
My guess is that doing a fresh install of PVE on the new drives and reimporting the config and guests would be a better approach, but you could also replace one NVMe at a time by failing and replacing each drive sequentially. You'd need to copy the partition table, let the ZFS mirror rebuild, and reconfigure the bootloader, but it should work.
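
A rough sketch of that drive-at-a-time route, assuming a default PVE ZFS install (ESP on partition 2, ZFS on partition 3) and hypothetical device names (/dev/nvme0n1 as an old mirror member, /dev/nvme2n1 as a new drive); check your own layout with lsblk and zpool status before running anything:

Code:
# copy the partition table from a current rpool member to the new disk, then randomize its GUIDs
sgdisk /dev/nvme0n1 -R /dev/nvme2n1
sgdisk -G /dev/nvme2n1

# swap the old mirror member for the new one; the old device name must match what zpool status shows
zpool replace rpool /dev/nvme0n1p3 /dev/nvme2n1p3
zpool status rpool    # wait for the resilver to finish

# make the new disk bootable via proxmox-boot-tool
proxmox-boot-tool format /dev/nvme2n1p2
proxmox-boot-tool init /dev/nvme2n1p2

# then repeat for the second NVMe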
 
@gridiron By "re-importing the config", do you mean copying over everything from /etc/pve from the old machine to the new one, or is there more to it?
 
Copying over relevant configs (/etc/pve and whichever others you've modified and want to retain), reinstalling any packages added manually, etc.
 
hi hive,

Does the server also have ZFS storage for the VMs?
In that case, install the new server with a fresh image and set up ZFS replication for the guests:
https://pve.proxmox.com/wiki/Storage_Replication

Via the CLI you can also try qm remote-migrate.
Or, like you mentioned, backup + restore.
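
For reference, a hedged example of the qm remote-migrate invocation; the VM ID, target host, API token, fingerprint, bridge and storage names below are all placeholders, and you should verify the current syntax with man qm, since the command has been marked experimental:

Code:
# all values are placeholders -- adjust VM ID, endpoint, bridge and storage to your environment
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee,host=192.0.2.50,fingerprint=<target cert fingerprint>' \
  --target-bridge vmbr0 \
  --target-storage local-zfs \
  --online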

For the datacenter config, just try copying /etc/pve/datacenter.cfg to the new host.
(It might not exist yet on the new host; if I remember correctly, that file is only created after the first modification of the default values.)
But maybe you are also interested in more configuration, like backup schedules, etc.?
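
If it is just those files, something like this from the new host should do; the old host address is a placeholder, and backup schedules created via the GUI typically end up in /etc/pve/jobs.cfg (older ones may still live in /etc/pve/vzdump.cron):

Code:
# pull the datacenter options and the backup/replication job definitions from the old host
scp root@192.0.2.10:/etc/pve/datacenter.cfg /etc/pve/datacenter.cfg
scp root@192.0.2.10:/etc/pve/jobs.cfg /etc/pve/jobs.cfg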

BR, Lucas
 
@bl1mp Essentially what I am trying to do is replace the motherboard and boot NVMe ZFS mirror with a faster motherboard + CPU + RAM and faster, more robust NVMe drives, and then keep using the existing U.2 storage, SAS controller, and spinning-rust drives (all ZFS). So swapping the mobo + CPU + RAM would directly reuse the controllers and drives that are currently running in the existing, slower system.
And yes, I'd like to get all config, backup schedules, etc.
 
I need to migrate a single PVE9 server to new hardware.
Funny, I have a similar requirement. I would like to move all the disks, rust & NVMe, which make up the ZFS pool PVE is booting from, to a different machine (other CPU, board, etc.). I guess it's probably not going to work and it's safer to "simply" reinstall / reconfigure everything. :confused:
 
Hi hive,

OK, so I think it would be useful to have the output of the boot attempt and maybe the exact syntax of the dd command that was used. Just to be sure, was the system booted from a live ISO (e.g. via USB stick) during the dd copy?

There is no easy backup and recovery tool for the PVE host that I am aware of.
Maybe you can just add the new NVMe disks to the ZFS mirror for the boot and system partitions. That could also be a possibility, but I do not work with ZFS that often, so I would need to look up the exact steps, too.

So for me, reinstalling would be the faster way.
I would expect it does not take too long to reconfigure the new host. Maybe it helps to have a runbook in advance, to avoid missing configuration items. If it helps, you can post it below for a review.

One thing you should be aware of is that your VMs might experience issues when you replace the CPU. This depends on the CPU type configured in each VM's hardware. I am just mentioning it in case you are bound by downtime limitations. If the VMs were set up with a CPU type of kvm64 or x86-64-v2-AES, which is or has been the default type, that will be fine.
For type host, you should compare the capabilities of the old and new CPU.
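
A quick way to check which CPU type each VM uses and to compare CPU capabilities between the two boards; the config path is the standard PVE location, and a VM without a cpu line simply uses the default type:

Code:
# show the configured CPU type per VM (no output for a VM means it uses the default)
grep -H '^cpu:' /etc/pve/qemu-server/*.conf

# run this on both the old and the new host and compare the results
lscpu | grep -i 'model name\|flags'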

BR, Lucas
 
Here's what I did in the end:
I created an ext4 USB thumb drive and mounted it on the old machine at /mnt/usb.
Old machine:
Tar the key config files to the thumb drive:

Code:
mount /dev/sdp /mnt/usb
cd /
tar czf /mnt/usb/pve.tar.gz ./etc/pve/nodes/* ./etc/pve/datacenter.cfg ./etc/pve/user.cfg ./etc/pve/storage.cfg ./etc/pve/jobs.cfg ./etc/hosts ./etc/resolv.conf ./etc/passwd ./etc/shadow ./etc/group ./etc/apt/sources.list.d/*

New Machine:
Installed PVE 9 using a ZFS mirror on the 2 new NVMe drives. I set it up and got it working on the bench before swapping motherboards in my rack.
Mount the thumb drive on /mnt/usb and untar over the new install:

Code:
cd /
tar xf /mnt/usb/pve.tar.gz

Then, on the old machine, export all the zpools.
Swap motherboards, boot the new machine, and import all the zpools:
Code:
zpool import -a
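
For the export step on the old machine, that is just one zpool export per data pool before powering it off; the pool names below are placeholders, and the root pool stays imported while the old system is still running from it:

Code:
# on the old machine, before shutting it down (pool names are examples)
zpool export u2pool
zpool export rustpool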

That seems to have resulted in a working setup on the new hardware.
The one thing I had to reconfigure manually was postfix for sending email, but at least I have an Ansible script for that.
 
Maybe you also want to compare:
Code:
/etc/network/interfaces
/etc/network/interfaces.d/*
This is to find artefacts in the network config that might cause connectivity issues for the VMs.
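
One way to do that comparison, assuming you saved a copy of the old host's interfaces file somewhere (e.g. on the same thumb drive used for the other configs; the path below is hypothetical):

Code:
# compare the old network config against the fresh install's version
diff -u /mnt/usb/interfaces.old /etc/network/interfaces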

And there might be more to check,
depending on how customized the config of the previous PVE server was.

BR, Lucas
 
Yes, @bl1mp, the fresh install created a working /etc/network/interfaces file that I used as a starting point (during the bench setup of the new motherboard, I only connected the 10Gb NIC that I wanted to use to my switch, and it was auto-selected). After swapping motherboards, I updated /etc/network/interfaces to use the static IP that my old server was using, so after that change it was a transparent replacement as far as the rest of my infrastructure was concerned.
 