[SOLVED] IOMMU not enabled on HPE Z640 after following guide

IT WORKED!!!!!!!!!!! Thank you for the help guys!

Would the 5.11 kernel still be a good idea to upgrade to?
 
Question @avw , can I still use the display option inside proxmox to get to the GUI if I don't want to be using the nvidia card for the GUI?
 
Question @avw , can I still use the display option inside proxmox to get to the GUI if I don't want to be using the nvidia card for the GUI?
Uncheck the Primary GPU option in the PCI Device setting and select a Display setting that works for you.
I expect this to be problematic with consumer Nvidia cards because their (closed source) drivers (used to) disrupt usage inside a VM. I have no experience with Nvidia for exactly this reason. The Primary GPU option adds some work-arounds for this as well, so you might have to add those work-arounds yourself.
Try it and maybe start a new topic if it does not work? (Please include your VM configuration file and more information about your NVidia card if you do)
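For what it's worth, a minimal sketch of what the relevant lines in the VM configuration could look like without the Primary GPU option (the PCI address 01:00 and the q35 machine type are assumptions, not taken from your setup):

Code:
# excerpt from /etc/pve/qemu-server/<vmid>.conf -- a sketch, address 01:00 is a placeholder
# no x-vga=1 here, so the passed-through card is not the primary GPU
hostpci0: 0000:01:00,pcie=1
# pcie=1 assumes a q35 machine type; keep a virtual display so the web console keeps working
vga: std

The Primary GPU checkbox essentially sets x-vga=1 and removes the virtual display, so leaving it unchecked keeps the normal noVNC console available.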
 
Hello to all,

I am unable to activate IOMMU on my ProLiant DL380 Gen9 (two Intel E5-2650 v3 CPUs, which support IOMMU).

After reading through the posts... I am wondering...
I changed the line in "/etc/kernel/cmdline" to

Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt

and applied the changes with "update-initramfs -u".

Nevertheless, if I run "cat /proc/cmdline", the output is:

Code:
initrd=\EFI\proxmox\5.13.19-2-pve\initrd.img-5.13.19-2-pve #root=ZFS=rpool/ROOT/pve-1 boot=zfs

as if nothing had been applied?
I also tried to configure it via GRUB, added the modules to /etc/modules, applied the changes with "update-initramfs", and rebooted the machine.
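(For reference, the modules meant here are the VFIO ones from the wiki; roughly the following, applied with another "update-initramfs -u -k all":)

Code:
# /etc/modules -- VFIO modules from the PCI passthrough guide
# (vfio_virqfd only exists as a separate module on older kernels such as 5.13)
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd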

When I use the command "dmesg | grep -e DMAR -e IOMMU" I get the output:

Code:
root@proliant:~# dmesg | grep -e DMAR -e IOMMU
[    0.000000] ACPI: DMAR 0x000000007B7FD000 000294 (v01 HP     ProLiant 00000001 HP   00000001)
[    0.000000] ACPI: Reserving DMAR table memory at [mem 0x7b7fd000-0x7b7fd293]
[    0.000000] DMAR: Host address width 46
[    0.000000] DMAR: DRHD base: 0x000000fbffc000 flags: 0x0
[    0.000000] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[    0.000000] DMAR: DRHD base: 0x000000c7ffc000 flags: 0x1
[    0.000000] DMAR: dmar1: reg_base_addr c7ffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[    0.000000] DMAR: RMRR base: 0x00000079174000 end: 0x00000079176fff
[    0.000000] DMAR: RMRR base: 0x000000791f4000 end: 0x000000791f7fff
[    0.000000] DMAR: RMRR base: 0x000000791de000 end: 0x000000791f3fff
[    0.000000] DMAR: RMRR base: 0x000000791cb000 end: 0x000000791dbfff
[    0.000000] DMAR: RMRR base: 0x000000791dc000 end: 0x000000791ddfff
[    0.000000] DMAR-IR: IOAPIC id 10 under DRHD base  0xfbffc000 IOMMU 0
[    0.000000] DMAR-IR: IOAPIC id 8 under DRHD base  0xc7ffc000 IOMMU 1
[    0.000000] DMAR-IR: IOAPIC id 9 under DRHD base  0xc7ffc000 IOMMU 1
[    0.000000] DMAR-IR: HPET id 0 under DRHD base 0xc7ffc000
[    0.000000] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.000000] DMAR-IR: Enabled IRQ remapping in x2apic mode

I would really appreciate any help!
 
IOMMU (VT-d) is clearly not fully enabled on your system (since there are no IOMMU groups).
In earlier posts you showed that you added the necessary kernel parameters to /etc/default/grub. Are the same parameters also in /etc/kernel/cmdline? Did you run update-initramfs -u and reboot? Can you show the output of cat /proc/cmdline, just to make sure that this is not the problem? This does happen sometimes.
Maybe you can check the HP support website for information about this or newer BIOS versions? I'm not familiar with HP systems myself, sorry.
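If you want to double-check after the next reboot, the following should print one line per PCI device once the IOMMU is really active; empty output means it is not:

Code:
# lists every PCI device together with the IOMMU group it belongs to
find /sys/kernel/iommu_groups/ -type l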
Hi leesteken,

I was following the posts hoping to find the reason why I cannot seem to get IOMMU to work on my ProLiant DL380 Gen9 (with two Intel E5-2650 CPUs, which support IOMMU)...

I changed the lines in /etc/default/grub and /etc/kernel/cmdline according to the Proxmox Wiki and applied the configuration with "update-grub", "proxmox-boot-tool refresh" and "update-initramfs -u", then rebooted the server.

The strange thing is that the command "cat /proc/cmdline" returns the following:

Code:
initrd=\EFI\proxmox\5.13.19-2-pve\initrd.img-5.13.19-2-pve #root=ZFS=rpool/ROOT/pve-1 boot=zfs

and it does not contain the parameters set in /etc/kernel/cmdline with "quiet intel_iommu=on"...

Do you have any ideas on what could be the problem?

Many thanks
 
I changed the lines in /etc/default/grub and /etc/kernel/cmdline according to the Proxmox Wiki and applied the configuration with "update-grub", "proxmox-boot-tool refresh" and "update-initramfs -u", then rebooted the server.

The strange thing is that the command "cat /proc/cmdline" returns the following:

Code:
initrd=\EFI\proxmox\5.13.19-2-pve\initrd.img-5.13.19-2-pve #root=ZFS=rpool/ROOT/pve-1 boot=zfs

and it does not contain the parameters set in /etc/kernel/cmdline with "quiet intel_iommu=on"...

Do you have any ideas on what could be the problem?
Please attach your actual /etc/default/grub and /etc/kernel/cmdline files. I suspect that you do not use ZFS in combination with UEFI, so the kernel parameters should be determined by GRUB, not systemd-boot.
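You can also verify which bootloader is actually being used with proxmox-boot-tool; it should report uefi (systemd-boot) or grub for the configured ESPs:

Code:
# shows whether the synced kernels are booted via systemd-boot (uefi) or grub
proxmox-boot-tool status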
 
Please attach your actual /etc/default/grub and /etc/kernel/cmdline files. I suspect that you do not use ZFS in combination with UEFI, so the kernel parameters should be determined by GRUB, not systemd-boot.
leesteken, thanks for taking a look into it!

File /etc/default/grub:

Code:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
#GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

The file /etc/kernel/cmdline:

Code:
#root=ZFS=rpool/ROOT/pve-1 boot=zfs
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt

I performed a System Reboot after applying the configuration with "update-grub", "proxmox-boot-tool refresh" and "update-initramfs -u".

Thanks again
 
The file /etc/kernel/cmdline:
#root=ZFS=rpool/ROOT/pve-1 boot=zfs
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt
I performed a System Reboot after applying the configuration with "update-grub", "proxmox-boot-tool refresh" and "update-initramfs -u".
I was wrong: your system does use ZFS in combination with UEFI, and cat /proc/cmdline does show what you set in /etc/kernel/cmdline (namely its first line). Note that only the first line of /etc/kernel/cmdline is used and it does not support comments (#). Remove the current first line of the file and make sure everything else is on a single (first) line. After the change, you only need to run proxmox-boot-tool refresh.
 
Hi leesteken,

thanks.
The file "/etc/kernel/cmdline" is now just this single line:

Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt

I ran "proxmox-boot-tool refresh":

Code:
root@proliant:~# proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/9EBB-2A56
        Copying kernel and creating boot-entry for 5.13.19-2-pve
Copying and configuring kernels on /dev/disk/by-uuid/9EC3-4F7B
        Copying kernel and creating boot-entry for 5.13.19-2-pve
root@proliant:~#

Then I performed a reboot.

When using the command "dmesg | grep -e DMAR -e IOMMU", the output is:

Code:
root@proliant:~# dmesg | grep -e DMAR -e IOMMU
[    0.022230] ACPI: DMAR 0x000000007B7E7000 000294 (v01 HP     ProLiant 00000001 HP   00000001)
[    0.022315] ACPI: Reserving DMAR table memory at [mem 0x7b7e7000-0x7b7e7293]
[    0.589875] DMAR: IOMMU enabled
[    1.400445] DMAR: Host address width 46
[    1.400447] DMAR: DRHD base: 0x000000fbffc000 flags: 0x0
[    1.400459] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[    1.400464] DMAR: DRHD base: 0x000000c7ffc000 flags: 0x1
[    1.400471] DMAR: dmar1: reg_base_addr c7ffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[    1.400475] DMAR: RMRR base: 0x00000079174000 end: 0x00000079176fff
[    1.400479] DMAR: RMRR base: 0x000000791f4000 end: 0x000000791f7fff
[    1.400482] DMAR: RMRR base: 0x000000791de000 end: 0x000000791f3fff
[    1.400485] DMAR: RMRR base: 0x000000791cb000 end: 0x000000791dbfff
[    1.400490] DMAR: RMRR base: 0x000000791dc000 end: 0x000000791ddfff
[    1.400496] DMAR-IR: IOAPIC id 10 under DRHD base  0xfbffc000 IOMMU 0
[    1.400500] DMAR-IR: IOAPIC id 8 under DRHD base  0xc7ffc000 IOMMU 1
[    1.400503] DMAR-IR: IOAPIC id 9 under DRHD base  0xc7ffc000 IOMMU 1
[    1.400505] DMAR-IR: HPET id 0 under DRHD base 0xc7ffc000
[    1.400508] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    1.401535] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    1.790603] DMAR: No ATSR found
[    1.790604] DMAR: No SATC found
[    1.790607] DMAR: dmar0: Using Queued invalidation
[    1.790615] DMAR: dmar1: Using Queued invalidation
[    1.807446] DMAR: Intel(R) Virtualization Technology for Directed I/O

When using the command "dmesg | grep IOMMU", the output is:

Code:
root@proliant:~# dmesg | grep IOMMU
[    0.589875] DMAR: IOMMU enabled
[    1.400496] DMAR-IR: IOAPIC id 10 under DRHD base  0xfbffc000 IOMMU 0
[    1.400500] DMAR-IR: IOAPIC id 8 under DRHD base  0xc7ffc000 IOMMU 1
[    1.400503] DMAR-IR: IOAPIC id 9 under DRHD base  0xc7ffc000 IOMMU 1

This would mean that IOMMU is now active, right? :)

Thanks again!

EDIT: I am now also able to select a PCI device (a 10Gb network card) as a test, and I no longer get the message in the GUI about missing IOMMU, nor any other error message.
THANKS!
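For anyone who finds this thread later: a loop like the following (taken from the usual passthrough guides, nothing specific to this server) shows which devices ended up in which IOMMU group, which is useful to check before passing a device through:

Code:
# print every IOMMU group and the PCI devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "  "; lspci -nns "${d##*/}"
    done
done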
 
