Hello everyone
I've added "pcie_acs_override", according to documentation to my GRUB config, like this:
(I wrote a small HowTo in my forum at "Das-Werkstatt", in case anyone wants details)
I've added "pcie_acs_override", according to documentation to my GRUB config, like this:
INI:
GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt amd_iommu=on pcie_acs_override=downstream,multifunction"
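For reference, a minimal sketch of how that change is usually applied and verified on a Debian/Ubuntu-style host, assuming the line lives in /etc/default/grub and update-grub is available (adjust commands for your distro):
Bash:
# Regenerate the GRUB config so the new kernel parameters take effect on the next boot
sudo update-grub    # without the wrapper: sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot

# After the reboot, list the resulting IOMMU groups to confirm the override split them up
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done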
IOMMU separation and passthrough both work fine and peachy, BUT:
Rebooting the host now triggers a new boot screen (which I guess is "mokmanager.efi"?) that keeps rebooting the machine in an endless loop after a few seconds, because the default option shown is "Reset system". This goes on until someone chooses "Continue boot", or (my current workaround for this issue) selects "Always continue boot".
So far so good, yet I'm wondering (hence this post) whether there's anything I'm missing, or should have done, that I'm unaware of.
What is the reason mokmanager.efi shows up in the first place? Is it because a new kernel module was loaded?
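In case it helps with diagnosis, here is a hedged sketch of how the Secure Boot / MOK state can be inspected from the running system, assuming the mokutil package is installed; a pending MOK request is one common reason why shim launches MokManager at boot:
Bash:
# Is Secure Boot enabled on this host?
mokutil --sb-state

# Keys already enrolled in the Machine Owner Key (MOK) database
mokutil --list-enrolled

# Keys queued for enrollment on the next boot (a pending request like this
# is one of the things that makes shim start MokManager)
mokutil --list-new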
Grateful for any information or insights.
Thank you!