[SOLVED] replacing server hardware and moving zfs disks issue

RobFantini

Hello
I searched; I think I saw this issue reported before.

We are replacing server hardware but moving the storage over to the new systems.

The 1st system had a single-disk ext4 PVE install and all went well.

The next one has ZFS RAID-1 and will not boot; instead a UEFI shell appears.

Could someone point me to the solution? I vaguely remember seeing a note about old ZFS setups having an issue [I searched but no luck]. Thanks.
 
Was that pool maybe running on a server using BIOS (so the grub bootloader was used), and is it now on a server using UEFI (so systemd-boot is required)?
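As a quick sanity check (generic Linux, nothing assumed beyond proxmox-boot-tool being present on PVE 6.4+), you can tell which mode a running system booted in like this:

Code:
# If this directory exists, the running system booted via UEFI;
# if it is missing, it booted in legacy BIOS mode.
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS"
fi

# On PVE 6.4+, proxmox-boot-tool reports how the ESPs are set up
proxmox-boot-tool status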
 
You could check whether that mainboard's UEFI supports a "Compatibility Support Module" (CSM) and enable it so you are able to boot via grub. Otherwise you would need to create an ESP on the old server, replace the bootloader (PVE 6.4 and PVE 7.0 come with proxmox-boot-tool), and hope that it will then boot with UEFI. Or just reinstall PVE on the new server using UEFI.
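A rough sketch of that ESP route, assuming the rpool mirror disks are /dev/sda and /dev/sdb, partition number 4 is unused, and there is free space at the end of each disk (none of which is guaranteed on older installs):

Code:
# Create a 512M EFI System Partition on each disk of the mirror
sgdisk -n 4:0:+512M -t 4:EF00 /dev/sda
sgdisk -n 4:0:+512M -t 4:EF00 /dev/sdb

# Format the new partitions and register them with proxmox-boot-tool
# (ships with PVE 6.4 / 7.0); this sets up systemd-boot for UEFI
proxmox-boot-tool format /dev/sda4
proxmox-boot-tool init /dev/sda4
proxmox-boot-tool format /dev/sdb4
proxmox-boot-tool init /dev/sdb4

# Verify both ESPs are registered and sync the kernels onto them
proxmox-boot-tool status
proxmox-boot-tool refresh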
 
thanks for the responses.

Due to the systems having been installed way back, there is no way to add a 512M ESP partition, so we will reinstall.

PS: adding a node to the cluster and setting up Ceph are much easier than before. The documentation and GUI made it very easy.