How to migrate from legacy grub to UEFI boot (systemd-boot)?

onlime

Renowned Member
Aug 9, 2013
76
14
73
Zurich, Switzerland
www.onlime.ch
On the PVE wiki page Host Bootloader it says:

Systems using ZFS as root filesystem are booted with a kernel and initrd image stored on the 512 MB EFI System Partition. For legacy BIOS systems, grub is used, for EFI systems systemd-boot is used. Both are installed and configured to point to the ESPs.

I still have some Proxmox VE systems which were installed in the old days. Some report:

Bash:
$ parted -l /dev/sda
Model: ATA INTEL SSDSC2KG01 (scsi)
Disk /dev/sda: 1920GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB   fat32              boot, esp
 3      538MB   1920GB  1920GB  zfs

$ proxmox-boot-tool refresh
$ proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
B0E1-0F27 is configured with: grub (versions: 5.15.64-1-pve, 5.15.74-1-pve)
B0E1-9558 is configured with: grub (versions: 5.15.64-1-pve, 5.15.74-1-pve)

So the ESP (UEFI) partition seems to be there, but it's unused. How do I sync it with the current kernel versions?

on other systems I get:

Bash:
$ proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.

$ proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.

How do I fix that?

And finally, on newly installed systems, I get this:

Bash:
$ proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
9301-393C is configured with: uefi (versions: 5.4.103-1-pve, 5.4.106-1-pve), grub (versions: 5.15.74-1-pve, 5.15.83-1-pve)
9301-E9D1 is configured with: uefi (versions: 5.4.103-1-pve, 5.4.106-1-pve), grub (versions: 5.15.74-1-pve, 5.15.83-1-pve)
9302-87DD is configured with: uefi (versions: 5.4.103-1-pve, 5.4.106-1-pve), grub (versions: 5.15.74-1-pve, 5.15.83-1-pve)

$ cat /etc/kernel/proxmox-boot-uuids
9301-393C
9301-E9D1
9302-87DD

But even running proxmox-boot-tool refresh on such a system does not update the horribly outdated kernel versions on the ESPs (UEFI). How do I fix that?

Is there some straightforward migration path for all these situations, some command that does a real refresh of the ESPs and fixes the above issues? I couldn't find any recommendations in the Proxmox VE wiki.

Thanks, Philip
 
I still have some Proxmox VE systems which were installed in the old days. Some report:
this looks to me like the system is already booted using proxmox-boot-tool - in legacy mode - meaning that the grub bootloader is installed to boot kernel images from the second (ESP) partition.
Where is the problem exactly? Is some kernel version missing? (If so, please post `pveversion -v` from this system and `proxmox-boot-tool kernel list`.)

on other systems I get:
This system does not use proxmox-boot-tool for handling the bootloader config - this is normal when you don't have '/' on ZFS. When was this system set up? What does the partition layout of the disks look like (`lsblk`)?
see: https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool
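in short, the wiki procedure roughly boils down to formatting and initializing the existing ESP(s) with proxmox-boot-tool, which also creates /etc/kernel/proxmox-boot-uuids. A minimal sketch - /dev/sdX2 is only a placeholder for the 512M ESP on each disk, check `lsblk` first:

Bash:
# sketch only - /dev/sdX2 is a placeholder, adjust to your layout
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2
# repeat for the ESP on every disk of the pool, then verify:
proxmox-boot-tool status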


And finally, on newly installed systems, I get this:
this system looks like it was installed with Proxmox VE 6.x and then upgraded
if you want to get rid of the 5.4 kernel - simply uninstall the kernel-image...
see the reference documentation:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_proxmox_boot_tool (kernel versions considered by proxmox-boot-tool)
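a sketch of what that would look like - the exact package names below are only examples based on the versions shown in your status output, so check what is actually installed first:

Bash:
# check which 5.4 kernel images are actually installed (package names are examples)
apt list --installed | grep pve-kernel-5.4
# remove them, then let proxmox-boot-tool re-sync the ESPs
apt remove pve-kernel-5.4.103-1-pve pve-kernel-5.4.106-1-pve
proxmox-boot-tool refresh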

I hope this helps!
 
Thanks @Stoiko Ivanov
this looks to me like the system is already booted using proxmox-boot-tool - in legacy mode - meaning that the grub bootloader is installed to boot kernel images from the second (ESP) partition.
Where is the problem exactly? Is some kernel version missing? (If so, please post `pveversion -v` from this system and `proxmox-boot-tool kernel list`.)

I just find the output of proxmox-boot-tool status confusing then. I would have expected something like:

Bash:
$ proxmox-boot-tool status
System currently booted with legacy bios
XXXX-XXXX is configured with: uefi (versions: 5.15.74-1-pve, 5.15.83-1-pve), grub (versions: 5.15.74-1-pve, 5.15.83-1-pve)

if booted with legacy bios (grub), I would still expect proxmox-boot-tool to report whether the uefi partition is correctly configured. So, if I want to migrate such a system to systemd-boot, would I just need to switch to EFI boot in the BIOS and then cross my fingers? The partitions are there, but without the "is configured with: uefi (versions:...)" information, I can only guess that proxmox-boot-tool has refreshed that partition correctly.

Again, this is what I get:

Bash:
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  1.7T  0 disk 
├─sda1   8:1    0 1007K  0 part 
├─sda2   8:2    0  512M  0 part 
└─sda3   8:3    0  1.7T  0 part 
sdb      8:16   0  1.7T  0 disk 
├─sdb1   8:17   0 1007K  0 part 
├─sdb2   8:18   0  512M  0 part 
└─sdb3   8:19   0  1.7T  0 part 

$ parted -l /dev/sda
Model: ATA INTEL SSDSC2KG01 (scsi)
Disk /dev/sda: 1920GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB   fat32              boot, esp
 3      538MB   1920GB  1920GB  zfs

$ proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
B0E1-0F27 is configured with: grub (versions: 5.15.64-1-pve, 5.15.74-1-pve)
B0E1-9558 is configured with: grub (versions: 5.15.64-1-pve, 5.15.74-1-pve)

How can I ensure the current kernels are correctly written to /dev/sda2?

This system does not use proxmox-boot-tool for handling the bootloader config - this is normal when you don't have '/' on ZFS. When was this system set up? What does the partition layout of the disks look like (`lsblk`)?
see: https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool

'/' is on ZFS and it looks like any other system:

Bash:
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  1.5T  0 disk
├─sda1   8:1    0 1007K  0 part
├─sda2   8:2    0  512M  0 part
└─sda3   8:3    0  1.5T  0 part
sdb      8:16   0  1.5T  0 disk
├─sdb1   8:17   0 1007K  0 part
├─sdb2   8:18   0  512M  0 part
└─sdb3   8:19   0  1.5T  0 part
sdc      8:32   0  1.5T  0 disk
├─sdc1   8:33   0 1007K  0 part
├─sdc2   8:34   0  512M  0 part
└─sdc3   8:35   0  1.5T  0 part
sdd      8:48   0  1.5T  0 disk
├─sdd1   8:49   0 1007K  0 part
├─sdd2   8:50   0  512M  0 part
└─sdd3   8:51   0  1.5T  0 part

$ parted -l /dev/sda
Model: ATA INTEL SSDSC2BX01 (scsi)
Disk /dev/sda: 1600GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB                      boot, esp
 3      538MB   1600GB  1600GB  zfs

$ zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 01:15:12 with 0 errors on Sun Dec 11 01:39:13 2022
config:
    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sda3    ONLINE       0     0     0
        sdb3    ONLINE       0     0     0
        sdc3    ONLINE       0     0     0
        sdd3    ONLINE       0     0     0

How can I generate /etc/kernel/proxmox-boot-uuids and migrate this to use proxmox-boot-tool?
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool
this system looks like it was installed with Proxmox VE 6.x and then upgraded
if you want to get rid of the 5.4 kernel - simply uninstall the kernel-image...
see the reference documentation:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_proxmox_boot_tool (kernel versions considered by proxmox-boot-tool)

I hope this helps!
That's what I had already checked before; there was no other kernel image installed on that system except the newer ones: 5.15.74-1-pve, 5.15.83-1-pve

Bash:
$ apt list --installed | grep pve-kernel
pve-kernel-5.15.74-1-pve/stable,now 5.15.74-1 amd64 [installed,automatic]
pve-kernel-5.15.83-1-pve/stable,now 5.15.83-1 amd64 [installed,automatic]
pve-kernel-5.15/stable,now 7.3-1 all [installed,automatic]
pve-kernel-helper/stable,now 7.3-1 all [installed]

Looks like this system was upgraded from a Proxmox VE 6.3 initially installed on 2021-01-20.
Any other idea how I can get rid of those 5.4 kernel entries? I thought proxmox-boot-tool refresh took care of this.
 
I just find the output of proxmox-boot-tool status confusing then. I would have expected something like:

Bash:
$ proxmox-boot-tool status
System currently booted with legacy bios
XXXX-XXXX is configured with: uefi (versions: 5.15.74-1-pve, 5.15.83-1-pve), grub (versions: 5.15.74-1-pve, 5.15.83-1-pve)
if booted with legacy bios (grub), I would still expect proxmox-boot-tool to report if the uefi partition is correctly configured. So, if I want to migrate such a system to systemd-boot, would I just need to change to EFI boot in BIOS and then cross fingers? Partitions are there, but without the "is configured with: uefi (versions:...)" information, I can only guess proxmox-boot-tool had refreshed that partition correctly.

I think I see the issue - with older versions of the installer ISO the kernels got added to the ESP in any case for UEFI boot - even if the system was not booted with UEFI.

You cannot directly switch from Legacy to UEFI by just changing the boot-type in BIOS - you'd need to boot a live-CD, import your pool, chroot into the system and then run proxmox-boot-tool init currently
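roughly, that sequence looks like this - a sketch only, with pool, dataset and device names assumed from a default ZFS install (the detailed transcripts further down in this thread show the exact steps):

Bash:
# from a live environment booted in UEFI mode (sketch; names assumed)
zpool import -f rpool
zfs set mountpoint=/mnt rpool/ROOT/pve-1
mount -t proc /proc /mnt/proc
mount --rbind /dev /mnt/dev
mount --rbind /sys /mnt/sys
mount --rbind /run /mnt/run
chroot /mnt /bin/bash
# inside the chroot, initialize each ESP
proxmox-boot-tool init /dev/sda2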

If you just want to get rid of the old kernels from back then - you can simply format and init the ESP again

* `proxmox-boot-tool format /dev/sda2 --force`
* `proxmox-boot-tool init /dev/sda2`
* `proxmox-boot-tool clean`

(similar with the other ESPs)
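the other ESPs can be done the same way, e.g. in a small loop - the device paths below are placeholders, check `lsblk` for your actual layout:

Bash:
# placeholders - adjust the device list to your actual ESPs
for esp in /dev/sdb2 /dev/sdc2; do
    proxmox-boot-tool format "$esp" --force
    proxmox-boot-tool init "$esp"
done
# drop stale UUIDs from /etc/kernel/proxmox-boot-uuids
proxmox-boot-tool clean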

Any other idea how I can get rid of those 5.4 kernel entries? I thought proxmox-boot-tool refresh took care of this.
currently not - it only considers the installed kernels/configs for the current boot-type (legacy vs. uefi)
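to see which kernel versions it will actually pick up and sync, you can check:

Bash:
# lists the kernels proxmox-boot-tool will keep in sync on the ESPs
proxmox-boot-tool kernel list
# and re-run the sync after changes
proxmox-boot-tool refresh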

I hope this explains it.
 
I think I see the issue - with older versions of the installer ISO the kernels got added to the ESP in any case for UEFI boot - even if the system was not booted with UEFI.

So this was previously possible? Why not anymore, and why is such a complicated workaround needed?

You cannot directly switch from Legacy to UEFI by just changing the boot-type in BIOS - you'd need to boot a live-CD, import your pool, chroot into the system and then run proxmox-boot-tool init currently

I don't quite get the point of proxmox-boot-tool then. I thought its sole purpose was to write kernel/boot configurations to both partitions, bios_grub and ESP. What's the difference between booting a live-CD and a running Proxmox VE 7.3? The fact that I boot that live-CD in EFI mode?

If you just want to get rid of the old kernels from back then - you can simply format and init the ESP again

* `proxmox-boot-tool format /dev/sda2 --force`
* `proxmox-boot-tool init /dev/sda2`
* `proxmox-boot-tool clean`

(similar with the other ESPs)

Great! This fixed it for those two servers where I still had those legacy kernels on the ESP.

I also managed to fix some other servers where the ESP was not initialized at all, using the same procedure (but without needing to --force format). Now I struggle with one last server, where the partitioning looks OK (exactly like on the other servers where this worked):

Bash:
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  1.5T  0 disk
├─sda1   8:1    0 1007K  0 part
├─sda2   8:2    0  512M  0 part
└─sda3   8:3    0  1.5T  0 part
sdb      8:16   0  1.5T  0 disk
├─sdb1   8:17   0 1007K  0 part
├─sdb2   8:18   0  512M  0 part
└─sdb3   8:19   0  1.5T  0 part
sdc      8:32   0  1.5T  0 disk
├─sdc1   8:33   0 1007K  0 part
├─sdc2   8:34   0  512M  0 part
└─sdc3   8:35   0  1.5T  0 part
sdd      8:48   0  1.5T  0 disk
├─sdd1   8:49   0 1007K  0 part
├─sdd2   8:50   0  512M  0 part
└─sdd3   8:51   0  1.5T  0 part

$ parted -l
Model: ATA INTEL SSDSC2BX01 (scsi)
Disk /dev/sda: 1600GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB                      boot, esp
 3      538MB   1600GB  1600GB  zfs

Model: ATA INTEL SSDSC2BX01 (scsi)
Disk /dev/sdb: 1600GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB                      boot, esp
 3      538MB   1600GB  1600GB  zfs

Model: ATA INTEL SSDSC2BX01 (scsi)
Disk /dev/sdc: 1600GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB                      boot, esp
 3      538MB   1600GB  1600GB  zfs

Model: ATA INTEL SSDSC2BX01 (scsi)
Disk /dev/sdd: 1600GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB                      boot, esp
 3      538MB   1600GB  1600GB  zfs

but `proxmox-boot-tool init` complains:

Bash:
$ proxmox-boot-tool format /dev/sda2
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
Formatting '/dev/sda2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.

$ proxmox-boot-tool init /dev/sda2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
E: '/dev/sda2' has wrong filesystem (!= vfat).

hope you have another idea of how to fix that. Thanks!
 
So this was previously possible? Why not anymore, and why is such a complicated workaround needed?
No, not really - this was from a time when proxmox-boot-tool was used exclusively for ZFS systems on UEFI (which were not supported at all until PVE 5.4) - the support for legacy machines was added quite a bit later (6.4 IIRC), since many users upgraded their zpools, which rendered them unbootable

I don't quite get the point of proxmox-boot-tool then. I thought its sole purpose was to write kernel/boot configurations to both partitions, bios_grub and ESP. What's the difference between booting a live-CD and a running Proxmox VE 7.3? The fact that I boot that live-CD in EFI mode?
Not quite - the bios_grub partition is only there for supporting legacy boot from gpt-partitioned disks - see
https://www.gnu.org/software/grub/manual/grub/html_node/BIOS-installation.html

the difference between running in a live system booted into UEFI and running inside PVE booted in legacy mode is that proxmox-boot-tool only works on the configurations for the current boot mode
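you can check which mode the currently running system was booted in with the standard EFI check (not specific to proxmox-boot-tool):

Bash:
# UEFI-booted kernels expose /sys/firmware/efi; legacy/BIOS boots do not
if [ -d /sys/firmware/efi ]; then
    echo "booted in UEFI mode"
else
    echo "booted in legacy BIOS mode"
fi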

AFAIK simply switching between legacy and UEFI boot renders systems unbootable quite often.

hope you have another idea of how to fix that. Thanks!
on a hunch - sometimes the kernel partition information does not get updated - usually a reboot fixes this.
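if you want to avoid a reboot, you could first try forcing a re-read of the partition table and check whether the kernel now sees the new vfat filesystem before retrying init - a sketch only; if it still shows the old state, a reboot as described above is the reliable fix:

Bash:
# ask the kernel to re-read the partition table, then re-check before retrying init
partprobe /dev/sda
udevadm settle
lsblk -o NAME,FSTYPE,PARTTYPE,UUID /dev/sda2
proxmox-boot-tool init /dev/sda2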

I hope this helps!
 
Hey @Stoiko Ivanov, your great support and in-depth knowledge are simply astonishing! Thanks a lot for sharing and clearing up all my questions.

I'll do the live-CD fix / migration to UEFI when I find some time and am near the datacenter.

on a hunch - sometimes the kernel partition information does not get updated - usually a reboot fixes this.
Yes, that was it! `proxmox-boot-tool init` worked fine on all SSDs after rebooting.

Happy New Year! Cheers, Philip
 
Thanks for the nice feedback :)

Glad you fixed the immediate issues!

Happy New 2023 to you as well :)
stoiko
 
Hi Stoiko. I have now tried to proxmox-boot-tool init the ESP partitions from a live system (Proxmox VE 7.3 on a USB stick, UEFI booted) in a chrooted environment, like this:
  1. Proxmox VE Installer: Advanced Options
  2. Install Proxmox VE (Debug mode)
  3. Ctrl-D
  4. root@proxmox:/#
then set up the chroot environment like this:

Bash:
$ zpool import -f rpool

$ zfs set mountpoint=/mnt rpool/ROOT/pve-1
$ mount -t proc /proc /mnt/proc
$ mount --rbind /dev /mnt/dev
$ mount --rbind /sys /mnt/sys
$ chroot /mnt /bin/bash

(chroot)$ lsblk
(chroot)$ proxmox-boot-tool status
System currently booted with uefi
07E8-9B56 is configured with: grub (versions: 5.15.74-1-pve, 5.15.83-1-pve)
0B10-D101 is configured with: grub (versions: 5.15.74-1-pve, 5.15.83-1-pve)
0CF7-8851 is configured with: grub (versions: 5.15.74-1-pve, 5.15.83-1-pve)
(chroot)$ proxmox-boot-tool init /dev/sda2
Re-executing '/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="" PKNAME="sda" MOUNTPOINT=""
E: '/dev/sda2' has wrong partition type (!= c12a7328-f81f-11d2-ba4b-00a0c93ec93b)..

# also tried re-formatting:
(chroot)$ proxmox-boot-tool format /dev/sda2
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="" PKNAME="sda" MOUNTPOINT=""
Setting partition type of '/dev/sda2' to 'c12a7328-f81f-11d2-ba4b-00a0c93ec93b'..
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
Calling 'udevadm settle'..
Formatting '/dev/sda2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.
(chroot)$ partprobe
# also tried rebooting in between...
(chroot)$ proxmox-boot-tool init /dev/sda2
# same problem as above

What's wrong here with that 'c12a7328-f81f-11d2-ba4b-00a0c93ec93b' partition type? The chroot environment should be fine, as I see my kernels in /boot.
 
on a hunch - try also bindmounting the live-system's /run (in the installer we first bindmount it to some other directory in the target - and then inside the chroot bindmount it to /run) - afair this was to work around a similar issue
 
on a hunch - try also bindmounting the live-system's /run (in the installer we first bindmount it to some other directory in the target - and then inside the chroot bindmount it to /run) - afair this was to work around a similar issue
Thanks, that worked. I just rbind-mounted /run directly:

Bash:
$ zpool import -f rpool
$ zfs set mountpoint=/mnt rpool/ROOT/pve-1
$ mount -t proc /proc /mnt/proc
$ mount --rbind /dev /mnt/dev
$ mount --rbind /sys /mnt/sys
$ mount --rbind /run /mnt/run
$ chroot /mnt /bin/bash

init then worked:

Bash:
(chroot)$ proxmox-boot-tool init /dev/sda2
UUID="23FC-E358" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
Mounting '/dev/sda2' on '/var/tmp/espmounts/23FC-E358'
Installing systemd-boot..
Created "/var/tmp/espmounts/23FC-E358/EFI/systemd".
Created "/var/tmp/espmounts/23FC-E358/EFI/BOOT".
Created "/var/tmp/espmounts/23FC-E358/loader".
Created "/var/tmp/espmounts/23FC-E358/loader/entries".
Created "/var/tmp/espmounts/23FC-E358/EFI/Linux".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/23FC-E358/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/23FC-E358/EFI/BOOT/BOOTX64.EFI".
Random seed file /var/tmp/espmounts/23FC-E358/loader/random-seed successfully written (512 bytes)
Successfully initialized system token in EFI variable with 512 bytes
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot.
Unmounting '/dev/sda2'.
Adding '/dev/sda2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
WARN: /dev/disk/by-uuid/07E8-9B56 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
WARN: /dev/disk/by-uuid/0B10-D101 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
WARN: /dev/disk/by-uuid/0CF7-8851 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
Copying and configuring kernels on /dev/disk/by-uuid/23FC-E358
        Copying kernel and creating boot-entry for 5.15.74-1-pve
        Copying kernel and creating boot-entry for 5.15.83-1-pve

But... bad luck! This has now rendered the system completely unbootable! I tried to change all PCI-E ports from LEGACY to EFI and the boot mode to UEFI in the BIOS. The SSDs are connected to an LSI/Broadcom HBA 9207-8i, which does not seem to support EFI (or the mainboard won't support it??). Switching everything back to LEGACY doesn't do the trick either. It looks like the proxmox-boot-tool init /dev/sda2 above has also touched my bios_grub partitions.

Giving up on this. I will probably need to set up that server from scratch tomorrow. If you have one last idea, please let me know.
 
Booting the system in legacy mode into a live-CD (the PVE ISO's debug shell), doing all the bind mounts as above, chrooting, and running proxmox-boot-tool format + init in the chroot (in legacy mode) should get the system back into a bootable state.

(also consider running proxmox-boot-tool clean at some point, though this should be mostly cosmetic)
 
Thanks, that worked as well. But unluckily, it seems impossible to rescue that server remotely, as it gets stuck on this prompt:

Code:
Cannot import 'rpool': pool was previously in use from another system.
The pool can be imported, use 'zpool import -f' to import the pool.

I can barely make out that prompt from a few pixels in the IPMI Remote Console preview, and there is no way to type anything. The server is also connected to an Aten KVM switch with a Raritan DKX4-101 in front, but the video drops out and I never get to see that screen. And I will definitely not install some legacy Java JRE again just to get the console over this legacy Supermicro IPMI card running.
No worries, I was playing around with an older server that is only used for testing, so I can fix that when I am in the datacenter.

Thanks a lot for all your help!
 
Did you try zpool import -f rpool? The "-f" flag is required to force the import, as ZFS won't otherwise let you import a pool that was in use by another system.
 
Did you try zpool import -f rpool? The "-f" flag is required to force the import, as ZFS won't otherwise let you import a pool that was in use by another system.
Yes, I did. But this ended up being more of a remote management issue, as it got really hard to debug any further. The video of my remote management solution (Raritan DKX4-101 connected to an Aten KVM switch) dropped out completely, so I was never able to see that black screen and could only type zpool import -f rpool blindly, without getting any feedback. After managing that, I ran into a kernel panic (again, I could only see this through a really bad quality screenshot from my second remote management solution, IPMI).
As I had no option to go to the datacenter and had already wasted so much time on this recovery, I gave up on it and simply set up the whole server from scratch (which is much faster, as it is fully configured with Ansible).

I remember this has happened to me before (a kernel panic after a successful zpool import -f rpool of an rpool previously mounted from a live-CD (PVE installer debug mode)), and I never found a solution for it.
 
