cryptsetup: Waiting for encrypted source device / boot fail after BIOS update

h0a

Sep 28, 2021
Hello everyone,

I am stuck in a difficult situation:

The server is running current, up-to-date Proxmox.
The setup is LVM on LUKS on a fake hardware RAID (Intel RAID controller) on a Supermicro motherboard.
I upgraded the BIOS and failed to preserve the settings (preserve NVRAM).

It starts booting from GRUB and then fails to find the fake RAID device md126.
Thus it cannot decrypt the LUKS partition md126p3.
The error messages I get after grub are:
"cryptsetup: Waiting for encrypted source device
UUID=### "
and
"mdadm: error opening /dev/md?*: No such file or directory"
and
"ALERT! /dev/mapper/srv--vg-root does not exist. Dropping to a shell!"
vg-root_does-not-exist.jpg
blkid does not show the md126 device (as it did before):
blkid.jpg
The devices sda through sdd are the four member devices of the RAID10.
Their type is "isw_raid_member", as it should be.
They should be assembled automatically as md126.
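
For anyone comparing notes, these are the kind of checks I ran in the initramfs shell to confirm the members are there but not assembled (device names are from my setup; the --examine call is just a generic mdadm check, nothing board-specific):

# the members should show TYPE="isw_raid_member"
blkid /dev/sd[a-d]
# inspect the IMSM metadata on one of the members
mdadm --examine /dev/sda
# see what mdadm has (or has not) assembled so far
cat /proc/mdstat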

Here is the kernel command line, including mdadm=true:
cmdline.jpg
I tried all sorts of combinations in the BIOS settings:
Boot mode: UEFI boot, legacy boot, or both. Currently it is set to "both".
BIOS-boot.jpg
I tried setting the RAID driver to UEFI mode and to legacy ROM:
SATA-RAID.jpg

Secure boot is disabled:
secureboot.jpg

If I boot from the legacy boot entry, I get a black screen and nothing happens.
I can only boot into the bootloader/GRUB with the UEFI boot entry.

I also tried adding efivars to the initramfs modules and regenerating the initramfs, but it does not show up when I run "cat /proc/modules" in the initramfs shell, so I cannot manually mount efivarfs.
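
For reference, this is roughly how I tried adding the module (standard initramfs-tools steps on Debian/Proxmox; note that on recent kernels the old efivars module no longer exists and efivarfs is used instead, which may be why nothing showed up):

# add the module name to the initramfs module list
echo efivars >> /etc/initramfs-tools/modules
# rebuild the initramfs for all installed kernels
update-initramfs -u -k all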

I can decrypt and mount everything manually from a live system, so the file systems are OK and nothing is corrupt.
The hardware / hard drives are also OK and healthy.
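
For completeness, the manual sequence from the live system was roughly this (the LUKS mapping name is arbitrary; the VG/LV names are taken from the device-mapper path in the error message above):

# assemble the IMSM fake RAID
mdadm --assemble --scan
# open the LUKS container on the third partition of the array
cryptsetup luksOpen /dev/md126p3 luksroot
# activate the volume group and mount the root LV
vgchange -ay srv-vg
mount /dev/mapper/srv--vg-root /mnt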

I am out of options at the moment.
What am I missing here?
I am kind of frustrated after trying for hours ...

Anyone able to help me out of this mess?

Sadly, restoring from the backup (PBS) is not an option, since some containers (LVM-thin volumes) are missing from the backups.
 
Hello everyone,

excuse my last post; this has been a complicated, intertwined series of issues.
What a rabbit hole!
You will learn to love error messages like "mdadm: imsm capabilities not found for controller soandso" and rethink the whole hardware RAID setup :eek:
First of all, the PVE installation was not on the current major version (PVE 9), so mdadm was outdated.
Efivars/efivarfs are also handled differently these days.
Without going any further into detail, just a hint for anyone in the future to save some time and worries:

To temporarily re-assemble the hardware RAID in the initramfs shell, set the mdadm environment variable IMSM_NO_PLATFORM:
IMSM_NO_PLATFORM=1 mdadm -As
man page: https://www.man7.org/linux/man-pages/man8/mdadm.8.html
This works for both the Intel RAID controller's "legacy ROM" and "UEFI" driver settings in the BIOS.
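
Putting it together, the recovery sequence from the initramfs shell looked roughly like this (the LUKS mapping name must match your /etc/crypttab entry, and the VG name is from my setup, so treat this as a sketch, not an exact recipe):

# assemble the IMSM array even though no platform/OROM capabilities are visible
IMSM_NO_PLATFORM=1 mdadm -As
# open the LUKS container and activate LVM, then let the normal boot continue
cryptsetup luksOpen /dev/md126p3 md126p3_crypt
vgchange -ay srv-vg
exit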

If the driver is set to "UEFI", an alternative way to get the RAID going is:
mount -t efivarfs efivarfs /sys/firmware/efi/efivars and then mdadm -As.

You may also want to check whether the correct EFI boot entry is set, either from within the BIOS or with efibootmgr, especially if the system was set up as vanilla Debian and later converted to a PVE.
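
A minimal sketch with efibootmgr (the disk, partition, label and loader path below are assumptions, adjust them to your ESP and bootloader):

# list the current EFI boot entries and boot order
efibootmgr -v
# example: create an entry pointing at GRUB on the ESP (here the first partition of /dev/sda)
efibootmgr -c -d /dev/sda -p 1 -L "proxmox" -l '\EFI\proxmox\grubx64.efi'
# example: put that entry first in the boot order (numbers as shown by efibootmgr -v)
efibootmgr -o 0000,0001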

Good luck to everyone out there :cool:
 