[SOLVED] help!!! grub rescue - proxmox 8.1

keropiko

Hello,

I have a 5-year-old Proxmox server with ZFS, originally installed with one of the early 5.x versions. Until now, every upgrade worked perfectly and the machine was on version 8.
Yesterday I upgraded, and after the reboot I get a "GRUB RESCUE - UNKNOWN FILESYSTEM" error.

I have read the guides here and tried to resolve it, but since the system dates from an early version, I don't have the 512 MB partitions.

I have booted with the live-USB Proxmox ISO, and I can mount the rpool to /mnt and see the files.

Please tell me what to do. If I have to reinstall, is there a way to save all the data (since ZFS creates its own partitions), or to run a backup through the live USB?

I have found a compatibility option for ZFS and set it to "grub2" on rpool, but with no result.
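
(For reference, the compatibility option mentioned above is presumably the pool-level compatibility property, roughly as sketched below. It only restricts which features may be enabled in the future; it does not deactivate features that are already active, so it cannot make an already-unreadable pool readable to GRUB again.)

Bash:
zpool set compatibility=grub2 rpool    # limit future feature activation to what GRUB can read
zpool get compatibility rpool          # verify the property
zpool get all rpool | grep feature@    # features already "active" stay active regardless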

Is there a way to resize the disk from the USB ISO and create a 512 MB partition there without losing data?

Please help; this is my work server and I am going crazy.

Thank you.
 
I have a 5-year-old Proxmox server with ZFS, originally installed with one of the early 5.x versions. Until now, every upgrade worked perfectly and the machine was on version 8.
Yesterday I upgraded, and after the reboot I get a "GRUB RESCUE - UNKNOWN FILESYSTEM" error.
That's problematic and has happened before on systems that had /boot on ZFS and upgraded the rpool to a newer version than GRUB understands. It's the reason Proxmox switched to proxmox-boot-tool and ESPs. There are threads about this, but it's not easy to switch to an ESP setup: https://pve.proxmox.com/wiki/Host_Bootloader.
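
(One way to confirm this diagnosis from a rescue environment, sketched below, is to import the pool and look at which feature flags are already active, since GRUB's ZFS reader only understands a small, old subset of them.)

Bash:
zpool import -f -R /mnt rpool            # import the root pool under an alternate root
zpool get all rpool | grep 'feature@'    # any "active" feature GRUB does not implement breaks booting
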
I have read the guides here and tried to resolve it, but since the system dates from an early version, I don't have the 512 MB partitions.
In a similar situation without an ESP, I once (Proxmox 4 or 5 or so) put the /boot partition on a USB drive and booted GRUB that way, but this was manual, cumbersome and error-prone.
I have booted with the live-USB Proxmox ISO, and I can mount the rpool to /mnt and see the files.
Have you tried rescue booting with the Proxmox 8.1 installer? I read that it might be able to boot your Proxmox rpool that way.
Please tell me what to do. If I have to reinstall, is there a way to save all the data (since ZFS creates its own partitions), or to run a backup through the live USB?

I have found a compatibility option for ZFS and set it to "grub2" on rpool, but with no result.

Is there a way to resize the disk from the USB ISO and create a 512 MB partition there without losing data?
I fear that's not possible with a ZFS pool. Maybe use an additional (USB) drive?
Please help; this is my work server and I am going crazy.
Maybe use one of your support tickets from your Proxmox support subscription if it's that important (if you have one?).

Please see if you can boot into your Proxmox with the rescue boot of the Proxmox installer. That will give you time to consider how to fix this later.
 
Hi, and thank you for the support. So I just copy the /boot partition to a USB drive and change the GRUB config?

I haven't figured out how to boot from the live USB; I can only mount the rpool.

Thank you again.
 
Hi, and thank you for the support. So I just copy the /boot partition to a USB drive and change the GRUB config?
I don't remember the details, sorry.
I haven't figured out how to boot from the live USB; I can only mount the rpool.
I don't know what you mean by "live usb". Try to rescue boot the Proxmox 8.1 installer (it's under Advanced Options, then Rescue Boot); if that works, then you will be fine for the immediate future.
 
@leesteken, thank you for your answers.

In the end the rescue boot did not work, but I was able to run Proxmox from the USB, mount the ZFS root filesystem, copy the config.db to the USB instance, find and back up the VMs and the Proxmox configs to an external USB drive, and finally format the main SSD and reinstall Proxmox from scratch.

I lost 1.5 days, but at least there was no data loss. The other ZFS disks, which hold most of the VMs (apart from the main root SSD), mounted easily on the new installation just by importing them!
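
(For future readers, the rescue and backup steps described above correspond roughly to the following sketch from the installer's debug shell; the dataset name, the VMID and the backup path are only examples, and config.db is the file holding the guest and cluster configuration on the old root.)

Bash:
zpool import -f -R /mnt rpool                           # old root pool, mounted under /mnt
cp /mnt/var/lib/pve-cluster/config.db /path/to/backup/  # guest + cluster configuration database
zfs list -t volume                                      # locate the VM disk zvols
zfs snapshot rpool/data/vm-100-disk-0@rescue            # example VMID/dataset
zfs send rpool/data/vm-100-disk-0@rescue | gzip > /path/to/backup/vm-100-disk-0.zfs.gz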

With the new, fresh installation, after the latest kernel updates I get this message:

Code:
run-parts: executing /etc/kernel/postinst.d/proxmox-auto-removal 6.5.11-5-pve /boot/vmlinuz-6.5.11-5-pve
run-parts: executing /etc/kernel/postinst.d/zz-proxmox-boot 6.5.11-5-pve /boot/vmlinuz-6.5.11-5-pve
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/EDB5-B976
        Copying kernel and creating boot-entry for 6.5.11-4-pve
        Copying kernel and creating boot-entry for 6.5.11-5-pve
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 6.5.11-5-pve /boot/vmlinuz-6.5.11-5-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.5.11-5-pve
Found initrd image: /boot/initrd.img-6.5.11-5-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Found linux image: /boot/vmlinuz-6.5.11-4-pve
Found initrd image: /boot/initrd.img-6.5.11-4-pve
/usr/sbin/grub-probe: error: unknown filesystem.
done
Setting up proxmox-kernel-6.5 (6.5.11-5) ...
Setting up proxmox-offline-mirror-helper (0.6.3) ...
Setting up proxmox-headers-6.5 (6.5.11-5) ...
Processing triggers for man-db (2.11.2-2) ...

Everything works after the reboot, but is the "/usr/sbin/grub-probe: error: unknown filesystem." message that still appears normal?

Thank you
 
Same here, like @keropiko described. After upgrading PVE 8.1 from kernel 6.5.11-5-pve (yesterday's reboot worked fine after the 6.5.11-4-pve to 6.5.11-5-pve upgrade) to 6.5.11-6-pve, GRUB reported:

Code:
error: no such device: 0dXXXXXXXXX.
error: unknown filesystem.
grub rescue>

In the past, I was always able to resolve such issues via PVE Installer's Debug mode, like this:

Bash:
$ zpool import -f rpool
$ zfs set mountpoint=/mnt rpool/ROOT/pve-1
$ zfs mount rpool/ROOT/pve-1
cannot mount 'rpool/ROOT/pve-1': filesystem already mounted

$ mount -t proc /proc /mnt/proc
$ mount --rbind /dev /mnt/dev
$ mount --rbind /sys /mnt/sys
 
$ chroot /mnt /bin/bash
(chroot)$ source /etc/profile
(chroot)$ grub-install /dev/sda
Installing for i386-pc platform.
grub-install.real: error: unknown filesystem.
(chroot)$ grub-install /dev/sdb
Installing for i386-pc platform.
grub-install.real: error: unknown filesystem.
(chroot)$ update-grub2
(chroot)$ update-initramfs -u
Ctrl-D
 
$ umount /mnt/sys
$ umount /mnt/dev
umount: /mnt/dev: target is busy.
$ umount /mnt/proc
$ zfs unmount rpool/ROOT/pve-1
$ zfs set mountpoint=/ rpool/ROOT/pve-1
Ctrl-D
Ctrl-D

This time, nothing helped (I always got error: unknown filesystem.) and I had to give up on it. Re-installing PVE 8.1 from scratch worked fine, and at least I did not lose my 22 TB of data, as they were on another zpool which I could easily re-import after the first boot.

But on the first apt dist-upgrade, I got another error: unknown filesystem.:

Code:
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.5.11-6-pve
Found initrd image: /boot/initrd.img-6.5.11-6-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Found linux image: /boot/vmlinuz-6.5.11-4-pve
Found initrd image: /boot/initrd.img-6.5.11-4-pve
/usr/sbin/grub-probe: error: unknown filesystem.
(...)
update-initramfs: Generating /boot/initrd.img-6.5.11-6-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/C535-1E84
    Copying kernel 6.5.11-4-pve
    Copying kernel 6.5.11-6-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.5.11-6-pve
Found initrd image: /boot/initrd.img-6.5.11-6-pve
Found linux image: /boot/vmlinuz-6.5.11-4-pve
Found initrd image: /boot/initrd.img-6.5.11-4-pve
done
Copying and configuring kernels on /dev/disk/by-uuid/C535-A9FB
    Copying kernel 6.5.11-4-pve
    Copying kernel 6.5.11-6-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.5.11-6-pve
Found initrd image: /boot/initrd.img-6.5.11-6-pve
Found linux image: /boot/vmlinuz-6.5.11-4-pve
Found initrd image: /boot/initrd.img-6.5.11-4-pve
done

This happened on a completely fresh PVE 8.1 system. Rebooting then works fine, but still, the /usr/sbin/grub-probe: error: unknown filesystem. messages are scary!

I hope this gets resolved before this kernel makes it into pve-enterprise. Thanks!
 
Same here, like @keropiko described. After upgrading PVE 8.1 from kernel 6.5.11-5-pve (yesterday's reboot worked fine after the 6.5.11-4-pve to 6.5.11-5-pve upgrade) to 6.5.11-6-pve, GRUB reported:

Code:
error: no such device: 0dXXXXXXXXX.
error: unknown filesystem.
grub rescue>

In the past, I was always able to resolve such issues via PVE Installer's Debug mode, like this:

Bash:
$ zpool import -f rpool
$ zfs set mountpoint=/mnt rpool/ROOT/pve-1
$ zfs mount rpool/ROOT/pve-1
cannot mount 'rpool/ROOT/pve-1': filesystem already mounted

$ mount -t proc /proc /mnt/proc
$ mount --rbind /dev /mnt/dev
$ mount --rbind /sys /mnt/sys
 
$ chroot /mnt /bin/bash
(chroot)$ source /etc/profile
(chroot)$ grub-install /dev/sda
Installing for i386-pc platform.
grub-install.real: error: unknown filesystem.
(chroot)$ grub-install /dev/sdb
Installing for i386-pc platform.
grub-install.real: error: unknown filesystem.
(chroot)$ update-grub2
(chroot)$ update-initramfs -u
Ctrl-D
 
$ umount /mnt/sys
$ umount /mnt/dev
umount: /mnt/dev: target is busy.
$ umount /mnt/proc
$ zfs unmount rpool/ROOT/pve-1
$ zfs set mountpoint=/ rpool/ROOT/pve-1
Ctrl-D
Ctrl-D

This time, nothing helped (I always got error: unknown filesystem.) and I had to give up on it.

So this was a legacy system with GRUB without an ESP? That is known to be problematic (it's basically a timebomb that can explode at any write to /boot), which is why we switched to a different setup. It's mentioned in the upgrade guide and linked there: https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool
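
(For readers wondering whether their own host is such a timebomb: a quick check, sketched below, is whether /boot still lives directly on the ZFS root and whether proxmox-boot-tool is managing any ESP at all.)

Bash:
df -T /boot               # "zfs" here means legacy GRUB reads the kernel straight out of the pool
proxmox-boot-tool status  # "E: /etc/kernel/proxmox-boot-uuids does not exist." means no ESP is in use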

Re-installing PVE 8.1 from scratch worked fine, and at least I did not lose my 22 TB of data, as they were on another zpool which I could easily re-import after the first boot.

But on the first apt dist-upgrade, I got another error: unknown filesystem.:

Code:
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.5.11-6-pve
Found initrd image: /boot/initrd.img-6.5.11-6-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Found linux image: /boot/vmlinuz-6.5.11-4-pve
Found initrd image: /boot/initrd.img-6.5.11-4-pve
/usr/sbin/grub-probe: error: unknown filesystem.
(...)
update-initramfs: Generating /boot/initrd.img-6.5.11-6-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/C535-1E84
    Copying kernel 6.5.11-4-pve
    Copying kernel 6.5.11-6-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.5.11-6-pve
Found initrd image: /boot/initrd.img-6.5.11-6-pve
Found linux image: /boot/vmlinuz-6.5.11-4-pve
Found initrd image: /boot/initrd.img-6.5.11-4-pve
done
Copying and configuring kernels on /dev/disk/by-uuid/C535-A9FB
    Copying kernel 6.5.11-4-pve
    Copying kernel 6.5.11-6-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.5.11-6-pve
Found initrd image: /boot/initrd.img-6.5.11-6-pve
Found linux image: /boot/vmlinuz-6.5.11-4-pve
Found initrd image: /boot/initrd.img-6.5.11-4-pve
done

This happened on a completely fresh PVE 8.1 system. Rebooting then works fine, but still, the /usr/sbin/grub-probe: error: unknown filesystem. messages are scary!

I hope this gets resolved before this kernel makes it into pve-enterprise. Thanks!

Yes, GRUB doesn't handle ZFS on / well, which is why we have an override in place that forces the initrd to pick up the rpool when using GRUB, instead of relying on the broken auto-detection. There is no way to silence the warnings, unfortunately.
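
(The override mentioned here can be seen on a booted system by looking at the kernel command line; the dataset name below is just an example.)

Bash:
cat /proc/cmdline
# typically contains something like: root=ZFS=rpool/ROOT/pve-1 boot=zfs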
 
So this was a legacy system with GRUB without an ESP? That is known to be problematic (it's basically a timebomb that can explode at any write to /boot), which is why we switched to a different setup. It's mentioned in the upgrade guide and linked there: https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool

Thanks a lot @fabian for your hint! That was exactly it. I remember trying to solve this a while ago, when I wanted to switch from GRUB to UEFI boot, but failed because of the legacy partitioning:

Bash:
$ proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.

$ lsblk -o +FSTYPE
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE
sdb      8:16   0 558.9G  0 disk             
├─sdb1   8:17   0     1M  0 part             
├─sdb2   8:18   0   128M  0 part             
└─sdb3   8:19   0 558.8G  0 part             zfs_member


I couldn't run proxmox-boot-tool format /dev/sdb2 to fix it, as the BIOS boot partitions were only 128M and proxmox-boot-tool requires >256M. I was not aware, though, that this was a ticking time bomb. I will now definitely not run zpool upgrade on those legacy systems until I get this sorted out. But I am having a hard time setting up Proxmox VE 8.1 from scratch due to another issue, reported here: https://forum.proxmox.com/threads/black-screen-vga-on-proxmox-ve-8-1-installer-boot.137408/

Best regards,
Philip
 
Just as a heads-up:

I will now definitely not run zpool upgrade on those legacy systems until I get this sorted out.

That is not enough. Any change to the files in /boot can expose the issue; the ZFS implementation inside GRUB is really brittle.
 
So this was a legacy system with GRUB without an ESP? That is known to be problematic (it's basically a timebomb
F***, it exploded today during the reboot. The installation is many, many years old, so there is no ESP partition yet. Therefore: reinstall. o_O
 
F***, it exploded today during the reboot. The installation is many, many years old, so there is no ESP partition yet. Therefore: reinstall. o_O
You can always fix it up by adding another drive (USB stick/...) and using that as an ESP via proxmox-boot-tool (even possible on a legacy system; the "ESP" is just used as non-ZFS storage in that case). Worst case, that drive fails at some point and you need to replace and re-init it, but with proxmox-boot-tool everything on the ESP is just a copy of the actual files stored on /, so no permanent loss can happen by losing the ESP.
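
(A minimal sketch of that recovery path, assuming the spare drive shows up as /dev/sdX and may be wiped completely; on a legacy-BIOS host a small BIOS boot partition is created alongside the ESP so grub-install has somewhere to put its core image.)

Bash:
sgdisk -Z /dev/sdX                     # wipe any old partition table on the spare drive
sgdisk -n1:0:+1M -t1:EF02 /dev/sdX     # tiny BIOS boot partition for legacy GRUB
sgdisk -n2:0:+512M -t2:EF00 /dev/sdX   # 512M "ESP" for proxmox-boot-tool to manage
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2
proxmox-boot-tool status               # the new ESP should now be listed by its UUID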
 
Code:
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE
sdb      8:16   0 558.9G  0 disk
├─sdb1   8:17   0     1M  0 part
├─sdb2   8:18   0   128M  0 part
└─sdb3   8:19   0 558.8G  0 part             zfs_member


I couldn't run proxmox-boot-tool format /dev/sdb2 to fix it, as the BIOS boot partitions were only 128M and proxmox-boot-tool requires >256M.

So this apparently meant the ESP partition, but I wonder why the PVE tool needs >256M. Is that just an arbitrary check, or is all the space actually used for something?

EDIT: (this is a Debian install with PVE on top)

Code:
/boot/efi/EFI/debian# ls -la
total 5960
drwx------ 2 root root    4096 Nov 30 04:47 .
drwx------ 3 root root    4096 Nov 30 04:47 ..
-rwx------ 1 root root     112 Nov 30 05:03 BOOTX64.CSV
-rwx------ 1 root root   88520 Nov 30 05:03 fbx64.efi
-rwx------ 1 root root     112 Nov 30 05:03 grub.cfg
-rwx------ 1 root root 4201064 Nov 30 05:03 grubx64.efi
-rwx------ 1 root root  850808 Nov 30 05:03 mmx64.efi
-rwx------ 1 root root  941096 Nov 30 05:03 shimx64.efi

/boot/efi/EFI/debian# du -sh .
5.9M    .

@fabian Can he just patch the tool and have it work for many more versions?
 
No, we actually require more space, since we put the kernel images and initrds on there as well (they must not be on ZFS for GRUB to work reliably, and systemd-boot doesn't understand ZFS either ;)).
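
(To get a feel for the numbers: a kernel image plus its initrd easily adds up to many tens of megabytes on a PVE host, and proxmox-boot-tool keeps several kernel versions on every ESP, so 128M fills up quickly. A quick check on a running system, as a sketch:)

Bash:
du -sch /boot/vmlinuz-* /boot/initrd.img-*   # what has to fit on each ESP for the kept kernels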
 
No, we actually require more space, since we put the kernel images and initrds on there as well (they must not be on ZFS for GRUB to work reliably, and systemd-boot doesn't understand ZFS either ;)).

So that's the secret sauce :D I don't have any ISO install with ZFS on root here right now, but I remember it had 2 separate pools; all along I was assuming there was some magic going on so that the initramfs could live on ZFS. With that said, I wonder what the purpose of the bpool is, then.

EDIT: On second thought, maybe I remember wrongly and this was Ubuntu's attempt at ZFS on root.
 
We don't set up any bpool; that's Ubuntu (and third-party howtos for Debian ;)). The bpool also does not work reliably: it just reduces the chance of outright failure due to features that are not implemented at all in GRUB's ZFS parser, and it doesn't help with the bugs beyond that...
 
We don't set up any bpool; that's Ubuntu (and third-party howtos for Debian ;)). The bpool also does not work reliably: it just reduces the chance of outright failure due to features that are not implemented at all in GRUB's ZFS parser, and it doesn't help with the bugs beyond that...
Yeah, sorry for the distraction; I really just thought the 128M ESP could still be of some use, as it's plenty for EFI normally. I must have seen that on Ubuntu; it was indeed horrible, everyone ran away, and zsys didn't do it any good either. I have had PTSD from ZFS on root ever since (despite using ZFS for storage). Having the kernels on the ESP is not to my taste either, but for a hypervisor I guess you can afford it.

For what it's worth, I can only recommend starting the first partition on any system at +8G; this always leaves room to put an ESP, /boot or even a basic Linux install there (on anything) without having to wrestle with the zpool that follows.
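
(A sketch of that idea with sgdisk, using a hypothetical /dev/sdX: the first data partition starts at the 8 GiB mark, leaving the space in front of it free for a future ESP or /boot partition.)

Bash:
sgdisk -Z /dev/sdX                           # start from a clean GPT (destroys existing data)
sgdisk -n1:8G:0 -t1:BF01 -c1:zfs /dev/sdX    # ZFS data partition from the 8 GiB mark to the end
sgdisk -p /dev/sdX                           # print the table and confirm the leading gap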

Thanks @fabian!
 
Before I do that, I would rather set up the server again.
Sure; I just wanted to ensure future readers know that it's possible to get out of that situation without a full re-install (e.g., if / and guest data are mixed, a re-install might be more effort).
 
Sure; I just wanted to ensure future readers know that it's possible to get out of that situation without a full re-install (e.g., if / and guest data are mixed, a re-install might be more effort).
Hey @fabian, thanks for the hint. I also wanted to warn future readers that your workaround ("fix it up by adding another drive") might be the better option, as re-installing PVE from scratch on such a legacy server can fail completely due to issues like this one: https://forum.proxmox.com/threads/black-screen-vga-on-proxmox-ve-8-1-installer-boot.137408/

This has never happened to me before, but I am now stuck with a non-working server and have 3 more to set up from scratch. They all run the same hardware, so I cannot risk destroying another one before I find a real solution for this.
 
You can always fix it up by adding another drive (USB stick/...) and using that as an ESP via proxmox-boot-tool (even possible on a legacy system; the "ESP" is just used as non-ZFS storage in that case). Worst case, that drive fails at some point and you need to replace and re-init it, but with proxmox-boot-tool everything on the ESP is just a copy of the actual files stored on /, so no permanent loss can happen by losing the ESP.

I have the same problem as described here (an old Proxmox installation without an ESP that no longer boots after a reboot, stuck with the "grub rescue" error).

https://forum.proxmox.com/threads/h...ibt-mit-unknown-filesystem-hängen-zfs.139046/

Unfortunately, the USB stick solution described here doesn't work for me. I get the following error messages:

Code:
proxmox-boot-tool format /dev/sde1 --force
...
Warning! One or more CRCs don't match. You should repair the disk!
...
Invalid partition data!

proxmox-boot-tool init /dev/sde1
...
E: /dev/sde1 has wrong partition type

What am I doing wrong? Does the USB stick have to be partitioned beforehand?

Thank you

Tony

P.S. Please excuse my bad English; I am not a native speaker.
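
(Not an authoritative answer, but the two messages suggest a damaged or stale partition table on the stick and a partition that does not carry the "EFI System" type that proxmox-boot-tool init expects. Under that assumption, the stick could be inspected as sketched below and then repartitioned as in the sketch a few posts above, after which format/init should have a clean target; everything on the stick is lost in the process.)

Bash:
sgdisk -v /dev/sde                               # report the GPT problems behind the CRC warning
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME /dev/sde  # the target partition should show "EFI System"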
 