PCI Passthrough issues with PVE 8.0.3

Zetto

Hey All,

I have reinstalled my Proxmox server with PVE 8.0.3 (I ran 7.4 before, but on a single NVMe drive), this time on a RAID1 SATA SSD pool. My first task was to get my Win11 game server running again, so I followed the steps in this guide to pass through the GPU.

https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/

Under PVE 7.4 it worked flawlessly on the first try, but now I can't add the PCI device to my VM and get the error "No IOMMU detected, please activate it. See Documentation for further information." I checked the BIOS and nothing has changed; VT-d is still enabled.
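
For reference, a quick way to check from the host shell whether the kernel sees the IOMMU at all (generic commands, nothing specific to this setup):

Bash:
# on Intel, a line like "DMAR: IOMMU enabled" indicates intel_iommu=on took effect
dmesg | grep -e DMAR -e IOMMU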

Has anyone experienced the same thing or is able to point me in the right direction?

The server is an HP Z440 with 128 GB RAM, a Xeon E5-2690 v4, and an Nvidia RTX 2070.

Thanks
 
Did you make the necessary changes to grub to enable IOMMU?
Yep, just as the guide describes... well, with a slight modification.

The guide suggests adding this line to GRUB:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"

But I previously used this line successfully:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset initcall_blacklist=sysfb_init"

However, I tried both, same result.
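
In case it helps, this is roughly how the change gets applied and verified after editing /etc/default/grub (assuming the host really boots via GRUB):

Bash:
# regenerate the GRUB config after editing /etc/default/grub
update-grub
reboot

# after the reboot, confirm the flags actually made it onto the kernel command line
cat /proc/cmdline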
 
I did a bit more testing and found out that it is not the PVE version but my installation method that prevents the IOMMU group split.

If I install PVE on a RAID1 pool of two SATA SSDs, I only ever get a single IOMMU group (I tried everything). If I install PVE on only one of the drives, I get the IOMMU group split that is required for GPU passthrough.
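
For comparison between the two installs, here is a generic way to list which devices end up in which IOMMU group (adapt as needed):

Bash:
#!/bin/bash
# print every IOMMU group together with the PCI devices it contains
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "    "
        lspci -nns "${d##*/}"
    done
done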

Anybody know why?

Thanks
 
If you are using an EFI system and installed on ZFS, the host boot loader is systemd-boot instead of GRUB. It's easy to notice, as the systemd-boot screen is a plain black screen instead of the classic blue GRUB one.

That essentially means you may be setting the IOMMU parameters for GRUB when you should be setting them for systemd-boot. Once the system has booted, check the contents of /proc/cmdline to see whether those parameters are actually on your boot line.

To edit the kernel parameters:

Grub: /etc/default/grub
Systemd-boot: /etc/kernel/cmdline

Relevant documentation is here: https://pve.proxmox.com/wiki/Host_Bootloader
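
For example (output will differ per system), these two commands show which boot line is active and how the installer set up the boot loader:

Bash:
# the parameters the running kernel was actually booted with
cat /proc/cmdline

# lists the configured ESPs and whether they use grub or systemd-boot (uefi)
proxmox-boot-tool status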
 
I had noticed the different boot screen but thought they had just changed it for PVE 8. This is the first time I've installed PVE on a RAID config, but I think this is the hint I needed.

How would I add the following line to /etc/kernel/cmdline? Or do I just copy it in like this?

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset initcall_blacklist=sysfb_init"

Thank you
 
Simply add quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset initcall_blacklist=sysfb_init to whatever you have now in /etc/kernel/cmdline.
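
As a rough example (the root= part below is just what a ZFS install typically looks like; keep whatever your file already contains), everything stays on one line, and after editing you have to write it out to the boot entries:

Bash:
# /etc/kernel/cmdline is a single line, e.g.:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset initcall_blacklist=sysfb_init

# apply the new command line to the systemd-boot entries, then reboot
proxmox-boot-tool refresh
reboot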
 
Hi Victor,

Yeah, that's what I did yesterday, and after a reboot everything was split into multiple IOMMU groups. Today, however, I'm back to a single group and have no idea why. I haven't changed anything on the line above. All I've done between yesterday and today is add more PCIe devices.

Now I get an error that IOMMU is not enabled every time I want to start a VM that has PCIe devices assigned to it.
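
A sanity check worth running after a reboot like this (generic commands, sketch only):

Bash:
# did the IOMMU flags survive the latest reboot?
cat /proc/cmdline

# does the kernel actually report the IOMMU as enabled?
dmesg | grep -e 'DMAR: IOMMU enabled' -e 'DMAR-IR'

# rough count of devices that ended up in IOMMU groups
find /sys/kernel/iommu_groups/ -type l | wc -l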
 
Hi,
I also tried to set up GPU passthrough. I am only looking for the decoding capabilities. My /proc/cmdline looks like this:
intel_iommu=on iommu=pt nofb nomodeset initcall_blacklist=sysfb_init video=efifb:off,vesafb:off

In the VM I get a "/dev/dri/card0", but I am missing the "renderD128" I used to have in PVE 7. I also get this error in dmesg (last line):
Bash:
dmesg | grep :00:02
[    0.546965] pci 0000:00:02.0: [8086:46d1] type 00 class 0x030000
[    0.546974] pci 0000:00:02.0: reg 0x10: [mem 0x6000000000-0x6000ffffff 64bit]
[    0.546980] pci 0000:00:02.0: reg 0x18: [mem 0x4000000000-0x400fffffff 64bit pref]
[    0.546985] pci 0000:00:02.0: reg 0x20: [io  0x4000-0x403f]
[    0.547000] pci 0000:00:02.0: BAR 2: assigned to efifb
[    0.547003] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[    0.547006] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    0.547029] pci 0000:00:02.0: reg 0x344: [mem 0x00000000-0x00ffffff 64bit]
[    0.547032] pci 0000:00:02.0: VF(n) BAR0 space: [mem 0x00000000-0x06ffffff 64bit] (contains BAR0 for 7 VFs)
[    0.547038] pci 0000:00:02.0: reg 0x34c: [mem 0x00000000-0x1fffffff 64bit pref]
[    0.547040] pci 0000:00:02.0: VF(n) BAR2 space: [mem 0x00000000-0xdfffffff 64bit pref] (contains BAR2 for 7 VFs)
[    0.645771] pci 0000:00:02.0: vgaarb: setting as boot VGA device
[    0.645771] pci 0000:00:02.0: vgaarb: bridge control possible
[    0.645771] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    0.656252] pnp 00:03: disabling [mem 0xc0000000-0xcfffffff] because it overlaps 0000:00:02.0 BAR 9 [mem 0x00000000-0xdfffffff 64bit pref]
[    0.670005] pci 0000:00:02.0: BAR 9: assigned [mem 0x4020000000-0x40ffffffff 64bit pref]
[    0.670011] pci 0000:00:02.0: BAR 7: assigned [mem 0x4010000000-0x4016ffffff 64bit]
[    0.670924] pci 0000:00:02.0: Adding to iommu group 0
[   19.041143] vfio-pci 0000:00:02.0: vgaarb: deactivate vga console
[   19.041177] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   19.090299] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   19.090804] vfio-pci 0000:00:02.0: vgaarb: deactivate vga console
[   19.090829] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   23.889969] vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
[   27.475428] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x47ce

I tried to dump the ROM, but it fails with an Input/output error.
I also tried what was suggested here, but that fails as well: https://github.com/SpaceinvaderOne/Dump_GPU_vBIOS/issues/3

Now I am stuck. Any further ideas? Or does someone have a ROM file for this iGPU:
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
 
It's quite a large topic, and it's powerful once you spend the proper time on it. Read the wiki and a couple of guides; different hardware has different capabilities. In short: it is not just about the cmdline, you have to set the options properly in GRUB plus the handful of other places. As it stands, your options are a mix of old and new parameters that will not work together, and an iGPU does not need a ROM file. Start with full passthrough to a Linux guest, then adjust once you have everything working. It's a couple of days well spent to get the most out of your system with Proxmox.
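
To see where it gets stuck inside a Linux guest, a few generic checks (the 00:02.0 address is only a placeholder; replace it with whatever slot the device actually has inside the VM):

Bash:
# renderD128 normally only appears once a real DRM driver (i915 for this iGPU) has bound;
# a bare card0 can also come from a generic framebuffer driver such as simpledrm
ls -l /dev/dri

# which kernel driver, if any, claimed the passed-through device
lspci -nnk -s 00:02.0

# look for i915 probe errors, e.g. missing GuC/HuC firmware
dmesg | grep -i -e i915 -e drm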
 
I already spent a lot of time reading guides; that's why the params might be mixed. None of them did the trick so far. Can you point me towards a working one?

There is a mistake in my post: it is not "/proc/cmdline" but "/etc/kernel/cmdline". I reduced the line to nothing but "root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on", but the result is the same. The guest VM is Linux.
 
