Installation of 6.4 fails on ZFS root

May 27, 2021
The install process completes fine, but after reboot the ZFS pools are missing and the system is unable to boot.

SuperMicro X7DBR-3, BIOS 2.1c
Adaptec AIC-9410 (onboard)
2xSAS 74GB in JBOD

Installing from ISO on USB (sdc).

Tried with 6.3 and it fails the same way.
Trying to install with ext4 on sda fails at the end of the process with the message "unable to initialize sdc" (that is the USB stick).

Please advise!
 
Booted a live USB (Ubuntu 20) and I see no zpools available on either of the disks.
For this reason I believe something goes wrong with the ZFS pools during installation, which causes the pool not to be created.
 
I was able to switch to a terminal and examine the system.
It is obvious that the zpool and ZFS datasets are created during installation.
I found some errors in dmesg.

Please advise.

(Attachments: IMG_5517.jpeg, IMG_5518.jpeg)
 
You just need to add the delay option. I don't know why they don't do this by default; I see this come up so often.

/etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10 quiet"

To get your system to boot now, just run:

zpool import -N 'rpool'
exit

EDIT:
After changing /etc/default/grub, note the comment at the top of that file:
# If you change this file, run 'update-grub' afterwards to update
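
For reference, applying this from a root shell looks roughly like the following (a sketch; the rootdelay value is just the 10 seconds used above):

Code:
# in /etc/default/grub, set:
#   GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10 quiet"
update-grub    # regenerate the GRUB config with the new kernel command line
reboot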
 
If you are using systemd-boot, see the following doc (bottom of the page): https://pve.proxmox.com/wiki/Host_Bootloader

Systemd-boot

The kernel command line needs to be placed as one line in /etc/kernel/cmdline. To apply your changes, run proxmox-boot-tool refresh, which sets it as the option line for all config files in loader/entries/proxmox-*.conf.

So that would be something like:
Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs rootdelay=10
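
In practice that could look something like this (a sketch, using the rpool/ROOT/pve-1 dataset from the example above):

Code:
# /etc/kernel/cmdline must contain the whole command line on a single line
echo "root=ZFS=rpool/ROOT/pve-1 boot=zfs rootdelay=10" > /etc/kernel/cmdline
# copy the new option line into all loader/entries/proxmox-*.conf files
proxmox-boot-tool refresh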
 
Hello all, and thanks for the advice. The system uses GRUB, and I was able to test your recommendation by simply editing the GRUB commands at boot time.
I added rootdelay=10 and the kernel indeed waited 10 seconds before trying to locate the root filesystem.

However, this attempt was not successful. The main problem is that there are no zpools available in the system at all after rebooting at the end of the install. I confirmed that by booting a live Linux USB and inspecting the hard drives: they have partition tables, but no zpools are available (the zfs module is loaded). 'rpool' simply does not exist on the system (?!).

(Attachment: IMG_5520.jpeg)
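
Roughly the kind of checks that show this from a live environment (a sketch, not the exact commands used; it assumes the ZFS tools are installed on the live system):

Code:
lsblk -o NAME,SIZE,FSTYPE       # list disks/partitions and detected filesystem types
zpool import                    # scan the default device paths for importable pools
lsmod | grep zfs                # confirm the zfs kernel module is loaded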
 
Hi again, I've been digging deeper into the issue.

Inspecting the installer log, I see that it exports the pool at the end of the process (photo 1).
Further, booting the system is fine, but it is unable to import 'rpool'. Manually importing 'rpool' does not work; manually importing the pool by disk id works (photo 2).

But the problem is that the pool information is not stored, and it is not imported again on the next reboot. Please advise how to make the pool information persistent so it can be imported at boot time.

(Attachments: IMG_5523.jpeg, IMG_5524.jpeg)
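
For reference, the by-id import mentioned above is roughly this, run from the initramfs rescue shell (a sketch; the pool name rpool is the installer default):

Code:
# scan /dev/disk/by-id instead of the default device nodes, import without mounting
zpool import -d /dev/disk/by-id -N rpool
exit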
 
Code:
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
Condition: start condition failed at Fri 2021-05-28 14:12:13 EEST; 1h 28min ago
           └─ ConditionFileNotEmpty=/etc/zfs/zpool.cache was not met
     Docs: man:zpool(8)

May 28 14:12:13 ic1 systemd[1]: Condition check resulted in Import ZFS pools by cache file being skipped
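
That condition points straight at the cache file, which can be checked directly (just reading the status output above):

Code:
ls -l /etc/zfs/zpool.cache      # zfs-import-cache.service is skipped if this is missing or empty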
 
That was one of the first things that I checked; the scsi-* ids are the same during install and at boot time.
It is as simple as the /etc/zfs/zpool.cache file missing. Once I created it and updated the initramfs, the system started booting properly and mounting rpool.
I think this is some kind of bug in the installer that shows up with my specific hardware configuration.
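
For anyone else hitting this, the fix described above is roughly (a sketch, assuming the pool is already imported and named rpool):

Code:
# writing the cachefile property regenerates /etc/zfs/zpool.cache for this pool
zpool set cachefile=/etc/zfs/zpool.cache rpool
# rebuild the initramfs so the cache file is picked up at boot
update-initramfs -u -k all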
 
Glad you figured it out. That seems to be a common issue; maybe they can resolve this?
 
Same issue with Proxmox 6.2/6.3 on two servers, both with:
Supermicro X11SSH-F / Xeon E3-1220 / 2x 480GB Samsung SSD.

A "zpool import -N 'rpool'" does not work, because there are no zpools available.
Installation on a single disk does not work either, because of the same error above.
 