Samsung PM983a Enterprise NVME: Failed to flush random seed file: Time out when using ZFS boot

Hi all,

I'm dumbfounded by this issue, which only manifests when I'm using this particular drive as a ZFS RAID0 boot disk for the PVE host. Installation of the PVE host goes smoothly (secure boot turned off), but upon reboot I'm presented with red text saying the following, and it takes me back to the BIOS screen:

Failed to flush random seed file: Time out
Error opening root path: Time out

This drive is an enterprise M.2 22110 part and is fully healthy; I can boot the PVE host off of it if I keep the filesystem as ext4.

I also know that non-enterprise NVMe drives play well with ZFS as the boot disk for a PVE host, so there must be something about these enterprise drives, perhaps along the encryption line, causing this issue.

Has anybody experienced anything similar, or can someone shed some light on a possible workaround?

Thanks a lot.
 
Adding additional information in case it helps resolve this issue. The above error was with PVE 8 during boot.

With PVE 7, I still get a similar error: "Failed to flush random seed file: C0798...."

Any help is much appreciated.
 
I don't recognize the error, but I use these drives in all my hosts without any issues. Maybe a firmware update would be worth a try?

Thanks for the insight, definitely worth a try.

How do you go about getting the latest firmware for an enterprise SSD? I would imagine Samsung Magician won't work with these drives.
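As a starting point, nvme-cli can at least report the drive's current firmware revision; OEM/datacenter Samsung parts typically get firmware through the server or drive vendor rather than Magician. A sketch (the device path /dev/nvme0 is an assumption; adjust for your system):

```shell
# Sketch: inspect current NVMe firmware with nvme-cli.
# /dev/nvme0 is an assumed device path; adjust to your system.
nvme list                           # lists controllers with model and firmware revision
nvme id-ctrl /dev/nvme0 | grep '^fr '   # "fr" field = running firmware revision
nvme fw-log /dev/nvme0              # firmware slot information log page
```

Comparing the reported revision against the vendor's release notes tells you whether an update is even available.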
 
I'm having a similar issue with my 2 new Samsung PM983 drives. Can't seem to figure it out...
 
It has been a while, but back when we set up our system with ZFS boot on SSDs, we had to configure GRUB to delay the ZFS invocation. Maybe you need to add something like rootdelay=30 to the kernel command line:
https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks#Boot_fails_and_goes_into_busybox
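For a GRUB-booted install, that suggestion amounts to roughly the sketch below (assumes the Debian-default /etc/default/grub; take a backup first):

```shell
# Sketch for GRUB installs: add rootdelay=30 to the default kernel command line.
# Assumes a Debian-style /etc/default/grub.
cp /etc/default/grub /etc/default/grub.bak
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 rootdelay=30"/' /etc/default/grub
update-grub   # regenerates /boot/grub/grub.cfg with the new parameter
```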

Thanks for the pointer. I'm using systemd-boot, not GRUB, so I have to adapt it as follows:

Code:
edit /etc/default/zfs, set ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4', and then issue a "update-initramfs -k 4.2.6-1-pve -u"

Because I'm stuck at boot with the error mentioned above and don't have access to a prompt, how do I go about editing and updating the boot image?
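For reference, on a systemd-boot PVE install the persistent equivalent of the GRUB edit would look roughly like the sketch below, once a root shell is available. It assumes the usual proxmox-boot-tool layout, where /etc/kernel/cmdline is a single line:

```shell
# Sketch for systemd-boot installs: append rootdelay=30 to the kernel
# command line and resync the ESP(s). Assumes /etc/kernel/cmdline is the
# single-line file managed by proxmox-boot-tool.
cp /etc/kernel/cmdline /etc/kernel/cmdline.bak
sed -i '1s/$/ rootdelay=30/' /etc/kernel/cmdline
proxmox-boot-tool refresh   # copies kernels/initrds and rewrites loader entries
```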
 
how do I go about editing and updating the boot image?
I think I am stuck near where you are. I was able to do the editing after running the following command:

zpool import -R /rpool -N rpool

Now I am stuck at the next step. The command in the documentation is specific to one kernel version. If you want to apply this change to all, which I think is the correct way to do it, you use the following command:

update-initramfs -k all -u

However, this command fails with the following error:

sh: update-initramfs: not found

So I am still stuck unable to boot.

Side note: ZFS_INITRD_PRE_MOUNTROOT_SLEEP doesn't exist in /etc/default/zfs; I had to append the key/value pair to the end of that file. Because I am unable to perform the next step, I don't know whether this config change works.
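The "update-initramfs: not found" error suggests these commands are being run from the initramfs's busybox shell, which ships only a minimal toolset. One common way out (a sketch; the pool name rpool and root dataset rpool/ROOT/pve-1 are the PVE defaults and may differ on your system) is to boot the PVE installer ISO, drop to its debug shell, and chroot into the installed system:

```shell
# Sketch: chroot into the installed system from a rescue shell so the real
# update-initramfs is available. Pool/dataset names are PVE defaults.
zpool import -f -R /mnt rpool
zfs mount rpool/ROOT/pve-1               # skip if the import already mounted it
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt update-initramfs -k all -u   # rebuild initrds with the real root's tools
chroot /mnt proxmox-boot-tool refresh    # systemd-boot installs: resync the ESPs
```

After that, export the pool (zpool export rpool) and reboot into the installed system.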
 

Right. Apparently, for systemd-boot I could hit "e" to make a temporary, non-sticky change and add
Code:
rootdelay=30
then hit "Enter", but to no avail: it still throws the same error.

Too bad I have to use an extra NVMe as a boot disk just to circumvent this issue. I would love to hear if someone out there has figured out how to get by without it.
 
I have scrapped trying to get ZFS working for the moment. I was able to install with zero issues using Btrfs. Unfortunately, Btrfs support is a tech preview, so I am unsure if this is a configuration I want to use.
 
