Are these problems a death sentence for my motherboard? Proxmox 2080 Ti passthrough on 9700K & Z370P-D3

lewinernst

Member
Jul 31, 2021
Hello everyone, I have been trying different guides (the beginner's guide on /r/homelab, CraftComputing, and the official Proxmox docs) with my 9700K on a Z370P-D3 running a fresh Proxmox 7.0-8 install, trying to pass through my 2080 Ti. However, Proxmox still shows "No IOMMU detected, please activate it. See Documentation for further information."
At this point I don't know where to look for new troubleshooting steps, because I don't know what's not working.
I have enabled VT-d in the BIOS; there is no separate IOMMU setting.
I have tried both commonly suggested GRUB command-line options.
I have tried blacklisting the drivers and adding the device IDs to /etc/modprobe.d/vfio.conf with disable_vga (trying both the first two IDs and all four).
My lspci -t output looks nicely separated (the GPU is 01:00.0 to 01:00.3, with GPU, sound, USB hub, and USB controller). Does this already show IOMMU groups, or is it just the bus layout? Is it possible my board doesn't support IOMMU at all and I've been wasting my time, encouraged by this nice layout?

Code:
-[0000:00]-+-00.0
           +-01.0-[01]--+-00.0
           |            +-00.1
           |            +-00.2
           |            \-00.3
           +-02.0
           +-08.0
           +-14.0
           +-16.0
           +-17.0
           +-1b.0-[02]--
           +-1b.2-[03]--
           +-1b.3-[04]--
           +-1b.4-[05]--
           +-1c.0-[06]--
           +-1c.2-[07]----00.0
           +-1c.3-[08]----00.0
           +-1c.4-[09]--
           +-1d.0-[0a]--
           +-1f.0
           +-1f.2
           +-1f.3
           \-1f.4
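
As an aside to the question above: lspci -t only shows the PCI bus topology, not IOMMU groups. Once the IOMMU is actually enabled, the groups appear under /sys/kernel/iommu_groups. A minimal sketch of listing them, assuming the standard sysfs layout (the helper name and its base-directory parameter are illustrative, not standard tooling):

```shell
#!/bin/sh
# Sketch: list devices per IOMMU group. The groups live under
# /sys/kernel/iommu_groups once IOMMU is active; an empty or missing
# directory means the IOMMU is not enabled.
list_iommu_groups() {
  base="${1:-/sys/kernel/iommu_groups}"
  if [ ! -d "$base" ] || [ -z "$(ls -A "$base" 2>/dev/null)" ]; then
    echo "No IOMMU groups found - IOMMU is not enabled."
    return 1
  fi
  for g in "$base"/*; do
    echo "Group ${g##*/}:"
    for d in "$g"/devices/*; do
      echo "  ${d##*/}"   # on a real system, lspci -nns "${d##*/}" is more readable
    done
  done
}
```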

The commonly suggested dmesg troubleshooting check shows the following:
Code:
dmesg | grep -e DMAR -e IOMMU
[    0.007009] ACPI: DMAR 0x00000000B8F569B8 0000A8 (v01 ALASKA A M I    00000001 INTL 00000001)
[    0.007037] ACPI: Reserving DMAR table memory at [mem 0xb8f569b8-0xb8f56a5f]
[    0.114700] DMAR: Host address width 39
[    0.114701] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.114705] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.114708] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.114710] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.114712] DMAR: RMRR base: 0x000000b97dd000 end: 0x000000b9a26fff
[    0.114714] DMAR: RMRR base: 0x000000bb000000 end: 0x000000bf7fffff
[    0.114715] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.114717] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.114718] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.116142] DMAR-IR: Enabled IRQ remapping in x2apic mode

I also found this script in another troubleshooting post, which yields the following:

Code:
for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort
find: ‘/sys/kernel/iommu_groups/*’: No such file or directory


Thank you very much for any hint on what to look for! If this is a problem with my motherboard, I would be glad for any recommendations of compatible motherboards you have gotten to work with this CPU.
 
Have you run update-grub (or proxmox-boot-tool refresh) after editing the command-line options? It's best to check `cat /proc/cmdline` after activating it and rebooting.
See:
https://pve.proxmox.com/pve-docs/chapter-qm.html#_general_requirements
and
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline
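For the GRUB path specifically, the edit can be scripted. A hedged sketch (the helper name add_grub_option is made up; on a real system you would pass /etc/default/grub, then run update-grub and reboot):

```shell
#!/bin/sh
# Sketch: append a kernel option to GRUB_CMDLINE_LINUX_DEFAULT in a grub
# defaults file. Helper name is illustrative; pass /etc/default/grub for real.
add_grub_option() {
  file="$1"; opt="$2"
  grep -q "$opt" "$file" && return 0   # skip if already present
  sed -i "s/^\(GRUB_CMDLINE_LINUX_DEFAULT=\"[^\"]*\)\"/\1 $opt\"/" "$file"
}
# After changing the real file: update-grub, reboot,
# then verify with: cat /proc/cmdline
```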
Hi, I did a fresh install, modified and updated GRUB again, and your cat command shows:
Bash:
initrd=\EFI\proxmox\5.11.22-1-pve\initrd.img-5.11.22-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs

You are probably on to something: I assume this means it loaded the line below the one I modified? My /etc/default/grub:

Bash:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox VE"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
 
I have now edited /etc/kernel/cmdline instead (just added intel_iommu=on, without quotes, to the end), ran proxmox-boot-tool refresh, and voilà:
Bash:
root@homelab:~# dmesg | grep -e DMAR -e IOMMU
[    0.007031] ACPI: DMAR 0x00000000B93099B0 0000A8 (v01 ALASKA A M I    00000001 INTL 00000001)
[    0.007057] ACPI: Reserving DMAR table memory at [mem 0xb93099b0-0xb9309a57]
[    0.047689] DMAR: IOMMU enabled
[    0.114617] DMAR: Host address width 39
[    0.114618] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.114622] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.114625] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.114627] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.114629] DMAR: RMRR base: 0x000000b97de000 end: 0x000000b9a27fff
[    0.114631] DMAR: RMRR base: 0x000000bb000000 end: 0x000000bf7fffff
[    0.114633] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.114634] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.114635] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.116052] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    1.693741] DMAR: No ATSR found
[    1.693742] DMAR: dmar0: Using Queued invalidation
[    1.693745] DMAR: dmar1: Using Queued invalidation
[    1.695410] DMAR: Intel(R) Virtualization Technology for Directed I/O

If I had to guess what was up: it seems Proxmox VE in my case doesn't use GRUB but instead uses systemd-boot, which is documented for multi-drive ZFS boot volume setups. I just set up ZFS RAID0 with a single drive because I wanted to check out ZFS for the boot drive, and the same thing happened. Does this make sense as an explanation?
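The systemd-boot equivalent is a one-line edit of /etc/kernel/cmdline, which can likewise be sketched as a small helper (the name is made up, not a PVE tool; on a real host run proxmox-boot-tool refresh afterwards):

```shell
#!/bin/sh
# Sketch: append a kernel option to the single-line /etc/kernel/cmdline
# that systemd-boot installs use.
add_cmdline_option() {
  file="$1"; opt="$2"
  grep -q "$opt" "$file" && return 0   # idempotent: skip if already there
  sed -i "s/\$/ $opt/" "$file"         # append to the end of the line
}
# On a real system: add_cmdline_option /etc/kernel/cmdline intel_iommu=on
# then: proxmox-boot-tool refresh && reboot
```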
 
If I had to guess what was up: it seems Proxmox VE in my case doesn't use GRUB but instead uses systemd-boot, which is documented for multi-drive ZFS boot volume setups.
PVE uses systemd-boot for all ZFS installations on systems booting from UEFI. Where did you pick up that this applies only to multiple-drive/RAID setups? (Maybe we can improve the documentation on that point.)

But yes, the explanation that PVE is booted with systemd-boot (and thus that you need to edit /etc/kernel/cmdline instead of /etc/default/grub) sounds correct :)

I hope this helps!
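
To check which loader a given install actually uses, `proxmox-boot-tool status` lists the ESPs and how they are managed. A minimal boot-mode check, with a made-up helper name and an illustrative path parameter:

```shell
#!/bin/sh
# Sketch: report whether the system booted via UEFI (a prerequisite for
# systemd-boot). On a real PVE host, also run: proxmox-boot-tool status
boot_mode() {
  if [ -d "${1:-/sys/firmware/efi}" ]; then
    echo "UEFI"
  else
    echo "legacy BIOS"
  fi
}
```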
 
Hi, thanks for the feedback and the help! It was a comment in another thread about a similar problem; I don't think it's a documentation problem. Maybe it would be useful as a hint in the official passthrough guide, since most community guides I saw don't mention the systemd-boot way.
 
