Where to Add Intel_IOMMU=on and IOMMU=pt?

Nollimox

Mar 9, 2023
I used the systemd-boot tool:

GNU nano 7.2 /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs

This is really frustrating...there is NOTHING in loader/loader.conf

root@nolliprivatecloud:~# find /sys/kernel/iommu_groups/ -type l
root@nolliprivatecloud:~# nano /etc/default/system.d
root@nolliprivatecloud:~# ls /etc/default/system.d
ls: cannot access '/etc/default/system.d': No such file or directory
root@nolliprivatecloud:~# nano /etc/kernel/cmdline
root@nolliprivatecloud:~# nano loader/loader.conf
root@nolliprivatecloud:~# dmesg | grep -e DMAR -e IOMMU -e
grep: option requires an argument -- 'e'
Usage: grep [OPTION]... PATTERNS [FILE]...
Try 'grep --help' for more information.
root@nolliprivatecloud:~# dmesg | grep -e DMAR -e IOMMU
[ 0.010307] ACPI: DMAR 0x0000000079823C70 0000C8 (v01 INTEL EDK2 00000002 01000013)
[ 0.010351] ACPI: Reserving DMAR table memory at [mem 0x79823c70-0x79823d37]
[ 0.224625] DMAR: Host address width 39
[ 0.224627] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.224634] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[ 0.224639] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.224643] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.224647] DMAR: RMRR base: 0x00000079687000 end: 0x000000796a6fff
[ 0.224649] DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
[ 0.224652] DMAR: RMRR base: 0x00000079739000 end: 0x000000797b8fff
[ 0.224655] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.224657] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.224660] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.227817] DMAR-IR: Enabled IRQ remapping in x2apic mode
root@nolliprivatecloud:~# nano /etc/kernel/cmdline
root@nolliprivatecloud:~#

YES, VT-d is enabled in the BIOS... I am reinstalling Proxmox 8...
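
For reference, a quick sketch of how to check which bootloader the system actually uses (and therefore which file takes the parameters), assuming proxmox-boot-tool manages the boot entries, as is typical for a ZFS-root UEFI install like this one:

# Show which ESPs proxmox-boot-tool manages and how they boot:
# "uefi" means systemd-boot -> edit /etc/kernel/cmdline
# "grub" means GRUB         -> edit /etc/default/grub
proxmox-boot-tool status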
 
It's there:
GNU nano 7.2 /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs
Intel_iommu=on
Intel_iommu=pt

root@nolliprivatecloud:~# dmesg | grep -e DMAR -e IOMMU
[ 0.010816] ACPI: DMAR 0x0000000079823C70 0000C8 (v01 INTEL EDK2 00000002 01000013)
[ 0.010861] ACPI: Reserving DMAR table memory at [mem 0x79823c70-0x79823d37]
[ 0.225126] DMAR: Host address width 39
[ 0.225128] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.225135] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[ 0.225140] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.225144] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.225148] DMAR: RMRR base: 0x00000079687000 end: 0x000000796a6fff
[ 0.225151] DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
[ 0.225153] DMAR: RMRR base: 0x00000079739000 end: 0x000000797b8fff
[ 0.225156] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.225159] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.225161] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.228317] DMAR-IR: Enabled IRQ remapping in x2apic mode
root@nolliprivatecloud:~#
 
Hi Nollimox, I'm far from an expert, but I think this is the way:

Enabling IOMMU

  • Access the Proxmox VE console via an external monitor or through the Shell on the web management interface
  • Type and enter: nano /etc/default/grub
  • Add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT="quiet" (see the example below)
  • Write Out the settings and Exit
  • Run the command update-grub to finalize changes
  • Reboot your Vault

Somehow the screenshot went missing. Here is the link where I got it from:
Protectli
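
Since the screenshot is gone, here is roughly what that GRUB line would look like (a sketch for GRUB-booted systems only; a ZFS root like yours normally boots via systemd-boot, in which case /etc/kernel/cmdline is the file to edit instead):

# /etc/default/grub (GRUB-booted systems)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# apply the change and reboot
update-grub
reboot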
 
It's there:
GNU nano 7.2 /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs
Intel_iommu=on
Intel_iommu=pt
I can't tell for sure (from the way you formatted this) but it looks like you put them on separate lines, which does not work. Everything has to be on the first line (as is stated in the manual but not very clearly maybe) or it will be ignored. Use root=ZFS=rpool/ROOT/pve-1 boot=zfs Intel_iommu=on Intel_iommu=pt instead.
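
For example, a minimal sketch of the systemd-boot variant (note the kernel expects the parameters in lowercase, the pass-through option from the thread title is iommu=pt rather than intel_iommu=pt, and proxmox-boot-tool has to copy the change to the ESP before it takes effect):

# /etc/kernel/cmdline -- one single line, parameters separated by spaces
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt

# write the updated command line to the boot entries, then reboot
proxmox-boot-tool refresh
reboot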
 
Use root=ZFS=rpool/ROOT/pve-1 boot=zfs Intel_iommu=on Intel_iommu=pt instead.
GNU nano 7.2 /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs Intel_iommu=on Intel_iommu=pt

And:

root@nollicomm:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
IOMMU group * 00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers [8086:3ec2] (rev 07)
00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
00:02.0 VGA compatible controller [0300]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e92]
00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
00:12.0 Signal processing controller [1180]: Intel Corporation Cannon Lake PCH Thermal Controller [8086:a379] (rev 10)
00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
00:15.0 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #0 [8086:a368] (rev 10)
00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
00:17.0 SATA controller [0106]: Intel Corporation Cannon Lake PCH SATA AHCI Controller [8086:a352] (rev 10)
00:1b.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 [8086:a340] (rev f0)
00:1b.4 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #21 [8086:a32c] (rev f0)
00:1d.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 [8086:a330] (rev f0)
00:1d.2 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #11 [8086:a332] (rev f0)
00:1f.0 ISA bridge [0601]: Intel Corporation Cannon Point-LP LPC Controller [8086:a309] (rev 10)
00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-LM [8086:15bb] (rev 10)
01:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)
02:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)
02:08.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)
03:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]
05:00.0 Non-Volatile memory controller [0108]: Kingston Technology Company, Inc. Device [2646:5017] (rev 03)
06:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
06:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
06:00.2 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
06:00.3 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
08:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)

root@nollicomm:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/3A8A-B073
Copying kernel and creating boot-entry for 6.2.16-3-pve

root@nollicomm:~# dmesg | grep -e DMAR -e IOMMU
[ 0.010085] ACPI: DMAR 0x0000000079823C70 0000C8 (v01 INTEL EDK2 00000002 01000013)
[ 0.010126] ACPI: Reserving DMAR table memory at [mem 0x79823c70-0x79823d37]
[ 0.224300] DMAR: Host address width 39
[ 0.224302] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.224309] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[ 0.224313] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.224317] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.224321] DMAR: RMRR base: 0x00000079687000 end: 0x000000796a6fff
[ 0.224324] DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
[ 0.224327] DMAR: RMRR base: 0x00000079739000 end: 0x000000797b8fff
[ 0.224330] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.224333] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.224335] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.227469] DMAR-IR: Enabled IRQ remapping in x2apic mode
root@nollicomm:~#

Still not passed through, despite getting the BIOS to recognize the Intel QAT adapter. The slot was set to auto-select the PCIe generation; after switching it to Gen2, the card was recognized. Enough for tonight...
 
GNU nano 7.2 /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs Intel_iommu=on Intel_iommu=pt

And:

root@nollicomm:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
IOMMU group * ...

Still not passed through, despite getting the BIOS to recognize the Intel QAT adapter. The slot was set to auto-select the PCIe generation; after switching it to Gen2, the card was recognized. Enough for tonight...
What is the output of cat /proc/cmdline? Is VT-d enabled in the motherboard BIOS and supported by both the motherboard and CPU? What make and model is the motherboard, what is the CPU?
 
Okay, @leesteken - thank you for your patience and foresight. I need to learn (without an emotional tantrum). After a good sleep, it became apparent. I had read, and re-read, this:

Host Configuration:
In this case, the host must not use the card. There are two methods to achieve this:

  • pass the device IDs to the options of the vfio-pci modules by adding

options vfio-pci ids=1234:5678,4321:8765

to a .conf file in /etc/modprobe.d/ where 1234:5678 and 4321:8765 are the vendor and device IDs obtained by:

# lspci -nn

  • blacklist the driver completely on the host, ensuring that it is free to bind for passthrough, with

blacklist DRIVERNAME

in a .conf file in /etc/modprobe.d/.
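
Applied to this host, the first method would look something like the sketch below, using 8086:0435 (the DH895XCC QAT co-processor from the lspci output above) as the example device; swap in whatever IDs you actually want to pass through:

# /etc/modprobe.d/vfio.conf -- bind the QAT card to vfio-pci by vendor:device ID
options vfio-pci ids=8086:0435

# rebuild the initramfs so the option is applied at boot, then reboot
update-initramfs -u -k all
reboot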

So, I did this:

GNU nano 7.2 /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

vfio
vfio_iommu_type1
vfio_pci
options vfio-pci ids=0604:0001, 0200:0300
vfio_virqfd

And this:

GNU nano 7.2 /etc/modprobe.d/pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE

# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
blacklist pci bridge
blacklist ethernet controler

Now I know why it didn't work: one issue was notation, the other was not paying attention to information that looks overwhelming at first. After looking again, I realize I have lots more to add:

Screenshot 2023-08-03 at 5.20.12 PM.png
And this:

Screenshot 2023-08-03 at 6.00.27 PM.png
So: Thank you!
 
What is the output of cat /proc/cmdline? Is VT-d enabled in the motherboard BIOS and supported by both the motherboard and CPU? What make and model is the motherboard, what is the CPU?
root@nollicomm:~# cat /proc/cmdline
initrd=\EFI\proxmox\6.2.16-3-pve\initrd.img-6.2.16-3-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs Intel_iommu=on Intel_iommu=pt
root@nollicomm:~#

Hardware: Dell Precision 3630 MT, 64 GB RAM, 250 GB M.2 NVMe, Intel i350 NIC + on-board NIC, Intel QAT 8950
CPU: i7-8700, 3.2 GHz
VT-d: Enabled

What I have done:
GNU nano 7.2 /etc/modules *
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

vfio
vfio_iommu_type1
vfio_pci
options vfio-pci ids=8086:1901, 8086:a340, 8086:a32c, 8086:a330, 8086:a332, 8086:a348, 10bs:8724, 8086:0435, 8086:1521
vfio_virqfd

AND THIS:
GNU nano 7.2 /etc/modprobe.d/pve-blacklist.conf *
# This file contains a list of modules which are not supported by Proxmox VE

# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
blacklist pcieport
blacklist igb
blacklist dh895xcc
blacklist snd_hda_intel

But that didn't work, and blacklisting igb meant the host lost the i350 NIC as well. So it seems these things are interdependent: the host can use a device, or it can be reserved for passthrough, but not both. I removed the blacklist entries and then ran update-initramfs -u -k all.
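
(As a side note, a sketch of how that split usually looks: /etc/modules should contain module names only, one per line, while the options vfio-pci line goes in a .conf file under /etc/modprobe.d/, with the IDs comma-separated and no spaces. And as far as I know, PCI bridges and PCIe root ports such as 8086:a340 or the PLX switch cannot be passed through at all, so they shouldn't be in the list.)

# /etc/modules -- module names only, no "options" lines
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# /etc/modprobe.d/vfio.conf -- for example the QAT card and the I350 ports
options vfio-pci ids=8086:0435,8086:1521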

Still not enabled:
root@nollicomm:~# dmesg | grep -e DMAR -e IOMMU
[ 0.010248] ACPI: DMAR 0x0000000079823C70 0000C8 (v01 INTEL EDK2 00000002 01000013)
[ 0.010289] ACPI: Reserving DMAR table memory at [mem 0x79823c70-0x79823d37]
[ 0.224585] DMAR: Host address width 39
[ 0.224587] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.224595] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[ 0.224599] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.224603] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.224607] DMAR: RMRR base: 0x00000079687000 end: 0x000000796a6fff
[ 0.224610] DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
[ 0.224612] DMAR: RMRR base: 0x00000079739000 end: 0x000000797b8fff
[ 0.224615] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.224618] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.224620] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.227790] DMAR-IR: Enabled IRQ remapping in x2apic mode
root@nollicomm:~#

This is how it should look, as you know:

Screenshot 2023-08-04 at 7.05.36 AM.png
 
Late to the party here, but it appears (if you copy/pasted) that your kernel command line arguments are incorrect. In both instances you've used Intel_iommu (as opposed to intel_iommu, notice the capitalization), and kernel parameters are case-sensitive, so they are being ignored.
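
A quick sketch of how to verify the fix after editing /etc/kernel/cmdline, running proxmox-boot-tool refresh and rebooting:

# the running command line should now show the lowercase parameters
cat /proc/cmdline

# with intel_iommu=on actually parsed, the DMAR lines should include
# something like "DMAR: IOMMU enabled" near the top
dmesg | grep -e DMAR -e IOMMU

# and the IOMMU groups should be populated
find /sys/kernel/iommu_groups/ -type l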
Actually, the problem was hardware (Coffee Lake / PVE 8). That machine has been given away and my new computer should arrive tomorrow. Thanks for sharing, though.
 