Advice on moving a PC (Debian 12.7, root on ZFS) to a VM

ozpos

New Member
Nov 10, 2024
Could anyone please help (home server, PVE 8.2.7, ZFS two-disk mirror)?

I have created a new VM based on PVE 8.2 (ZFS mirror), since it has ZFS pre-rolled. Within the VM I can create the pools and receive the rpool and bpool from the PC (a rough sketch of the transfer follows the questions below).
  1. Do I have to create an EFI boot disk when all the boot config is in the bpool?
  2. Can I use SeaBIOS to mount bpool/BOOT/debian at /boot and rpool/ROOT/debian at /, and thus launch the GRUB menu?
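For context, the transfer I have in mind is roughly the following sketch (dataset names as on my PC; 'vm-host' is just a placeholder for the VM's address, and the snapshot name is arbitrary):

    # on the PC: take recursive snapshots of both pools
    zfs snapshot -r rpool@migrate
    zfs snapshot -r bpool@migrate
    # stream them into pools of the same name that already exist in the VM
    zfs send -R rpool@migrate | ssh root@vm-host zfs receive -F rpool
    zfs send -R bpool@migrate | ssh root@vm-host zfs receive -F bpool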
Best regards and a big thanks to all those involved,
oz
 
Hi, this is my second thread relating to my question (the first I deleted after 146 views with no replies). Obviously support must go to more mission-critical commercial installations, so could I ask: is there anywhere else I could go for help?
 
If your Debian on the physical PC boots in UEFI mode, you'll need OVMF and an EFI disk, and you'll probably have to do a boot repair, as the boot order and target are stored in EFI variables (in the UEFI firmware on hardware, and on the EFI disk in a VM).
If your Debian on the physical PC boots in legacy/BIOS mode, then SeaBIOS ought to work fine.
I have no experience with moving Debian on ZFS from a physical to a virtual machine, sorry.
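For reference, the firmware choice is just VM configuration, so switching is easy; roughly something like this (the VM ID 100 and the storage name local-zfs are only examples):

    # UEFI: use OVMF and give the VM an EFI disk to hold the EFI variables
    qm set 100 --bios ovmf
    qm set 100 --efidisk0 local-zfs:1,efitype=4m,pre-enrolled-keys=0
    # legacy/BIOS: SeaBIOS (the default); no EFI disk needed
    qm set 100 --bios seabios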
 
Hi, this is my second thread relating to my question (the first I deleted after 146 views with no replies). Obviously support must go to more mission-critical commercial installations, so could I ask: is there anywhere else I could go for help?
Please don't make multiple threads for the same question within just a couple of days (as threads cannot be removed or de-duplicated). Everybody here is a volunteer and there are many questions each day. If you want official support within a guaranteed response time, please buy a subscription with support tickets. I have seen people get answers more quickly on Reddit in the past. There is also a mailing list, as well as Server Fault.
 
If your Debian on the physical PC boots in UEFI mode, you'll need OVMF and an EFI disk, and you'll probably have to do a boot repair, as the boot order and target are stored in EFI variables (in the UEFI firmware on hardware, and on the EFI disk in a VM).
If your Debian on the physical PC boots in legacy/BIOS mode, then SeaBIOS ought to work fine.
I have no experience with moving Debian on ZFS from a physical to a virtual machine, sorry.
Hi, Thank you very much for your help.

The PC does in fact boot in UEFI mode.

Do you know how easy it is (or should one even try) to convert from UEFI boot mode to SeaBIOS/coreboot mode?

Best regards,
oz
 
Do you know how easy it is (or should one even try) to convert from UEFI boot mode to SeaBIOS/coreboot mode?
Windows has a lot of problems with this. I think with Debian/Linux you only have to repair your boot loader (GRUB or something else?) after changing the motherboard (or virtual) boot mode.
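Untested sketch of what that repair might involve (the disk name /dev/sda is only an example; legacy GRUB on a GPT disk also needs a small BIOS boot partition):

    # check how the currently running system was booted
    [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"
    # reinstall GRUB for legacy/BIOS boot
    grub-install --target=i386-pc /dev/sda
    # or reinstall GRUB for UEFI boot (needs an ESP mounted at /boot/efi)
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian
    update-grub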
 
Thank you. I have no problem transferring the rpool and the bpool from the PC into my VM.

Could you please tell me roughly how I go about populating the virtual EFI partition?
 
Could you please tell me roughly how I go about populating the virtual EFI partition?
I thought your PC (and now VM) has no EFI partition and all the necessary boot stuff is on the bpool?
The virtual EFI disk is a place for Proxmox to store EFI variables (which are in CMOS memory on a physical motherboard).
Either way, you might need to do a GRUB boot repair for your VM, which is probably something like: boot from a Debian ISO, chroot into your Debian, and run grub-install.
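In outline, and only as a rough sketch (I'm assuming the dataset names you mentioned and a live environment that has the ZFS tools available):

    # from the live/rescue system: import both pools under /mnt without auto-mounting
    zpool import -N -R /mnt rpool
    zpool import -N -R /mnt bpool
    # mount root first, then boot
    zfs mount rpool/ROOT/debian
    zfs mount bpool/BOOT/debian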
Or did I not understand your question?
 
Thank you very much for your help.

To clarify: the PC uses UEFI; however, I created the VM from PVE 8.2, since that kernel has ZFS pre-rolled, so the VM would not have an EFI disk.

Since the PC has Debian 12.7 on a ZFS root, I can easily zfs send | receive the bpool and rpool directly to the VM. The bpool contains GRUB and the kernel/initramfs pairs, etc.

If I can mount the bpool at '/boot' and the rpool at '/', would an update-grub just do the trick?
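To be clear about what I mean by "mount": in the usual root-on-ZFS layout the mountpoints are ZFS properties, so inside the VM I would expect something like this (values as set up by the OpenZFS Debian guide):

    # check where the received datasets want to be mounted
    zfs get -o name,value mountpoint rpool/ROOT/debian bpool/BOOT/debian
    # expected: rpool/ROOT/debian -> /  and  bpool/BOOT/debian -> /boot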
 
Thank you very much for your help.

To clarify: the PC uses UEFI; however, I created the VM from PVE 8.2, since that kernel has ZFS pre-rolled, so the VM would not have an EFI disk.
To clarify: the EFI disk is a Proxmox-specific thing for VMs with UEFI/OVMF, and you seem to be confusing it with an EFI partition.
Physical systems with UEFI never have EFI disks. They can have an ESP (EFI System Partition, which your PC does not appear to have or need) and CMOS for the EFI variables.
VMs with OVMF can have an ESP (on a virtual disk, but yours won't need it), and they keep the EFI variables on the EFI disk (because they don't have CMOS).

Since the PC has Debian 12.7 on a ZFS root, I can easily zfs send | receive the bpool and rpool directly to the VM. The bpool contains GRUB and the kernel/initramfs pairs, etc.

If I can mount the bpool at '/boot' and the rpool at '/', would an update-grub just do the trick?
Mounting is not enough; you also need to chroot into it. And you might need to run update-grub or grub-install as well.
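Concretely, once the datasets are mounted somewhere like /mnt, the chroot part is roughly this (untested sketch; the disk name is only an example, in a VM it is often /dev/vda):

    # make the running kernel's virtual filesystems visible inside the chroot
    mount --rbind /dev  /mnt/dev
    mount --rbind /proc /mnt/proc
    mount --rbind /sys  /mnt/sys
    chroot /mnt /bin/bash
    # now inside the chroot: regenerate the GRUB config and reinstall GRUB if needed
    update-grub
    grub-install /dev/sda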
 
Mounting is not enough; you also need to chroot into it. And you might need to run update-grub or grub-install as well.
Thank you. 1) That sounds difficult, is it?

2) The VM is Proxmox VE 8.2 with its own web UI; why would update-grub not find the kernels in the bpool?

Sorry, the penny has just dropped: the bpool would not exist there, however;

3) What if the bpool was on the host?
 
Thank you. 1) That sounds difficult, is it?
It's not uncommon to have to do it to fix boot problems on physical (or virtual) systems. There are lots of guides on the internet. It's not in any way Proxmox specific (even though there are more than a few threads about this on this forum).
2) The VM is Proxmox VE 8.2 with its own web UI; why would update-grub not find the kernels in the bpool?
Because you have to repair the boot loader from within the system (physical or virtual) that is being repaired, running as if it had started itself; that is what the chroot is for. If you know a better way, go right ahead.
 
Thank you, but unfortunately I know very little about chroot and what goes on during boot.

Back to the internet it is then.

Thank you very much for your help.

I will check back to see if anyone else can point me at an example of moving a Linux PC to a PVE VM.
 
