[SOLVED] Enabling IOMMU on PVE6 (zfs) compared to PVE5.4 (ext4)?

n1nj4888

Member
Jan 13, 2019
Hi All,

I did a clean install, moving from Proxmox 5.4 (single-disk ext4) to Proxmox 6 (single-disk ZFS), and I can't seem to get IOMMU enabled under PVE6.

I followed the instructions below (as I had with PVE5.4), but I suspect that because I'm now booting from ZFS, the system uses systemd-boot rather than GRUB as PVE5.4 did - https://pve.proxmox.com/wiki/Pci_passthrough

Following the above guide I did:

a) nano /etc/default/grub
b) Add “intel_iommu=on” to the GRUB_CMDLINE_LINUX_DEFAULT line. I notice there is a second line here (GRUB_CMDLINE_LINUX) that mentions the ZFS rpool - should “intel_iommu=on” also be added to that line?

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"

c) Run update-grub
d) Run: dmesg | grep -e DMAR -e IOMMU:

Code:
[    0.012253] ACPI: DMAR 0x0000000079DF1E38 0000A8 (v01 INTEL  NUC8i5BE 00000047      01000013)
[    0.315012] DMAR: Host address width 39
[    0.315014] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.315021] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.315024] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.315029] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.315032] DMAR: RMRR base: 0x00000079d35000 end: 0x00000079d54fff
[    0.315034] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[    0.315038] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.315040] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.315042] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.317210] DMAR-IR: Enabled IRQ remapping in x2apic mode

e) add to /etc/modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

f) Reboot the PVE6 host.
g) Verify IOMMU isolation by running find /sys/kernel/iommu_groups/ -type l
h) Nothing is output (the directory is empty)
i) The Proxmox web GUI also reports “No IOMMU detected, please activate it. See documentation for further information.” when editing the PCI device passed through to the VM (via VMID -> Hardware -> PCI Device (hostpci0))
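As a sanity check before going further, here is a rough sketch of how I'd tell the two boot methods apart and re-check IOMMU (my assumption, not from the guide: a systemd-boot based PVE 6 install ships /etc/kernel/cmdline, while a GRUB install does not):

```shell
# Rough sketch: figure out which bootloader is in use, since that decides
# where intel_iommu=on has to go. Assumption: only systemd-boot installs
# have /etc/kernel/cmdline.
if [ -f /etc/kernel/cmdline ]; then
    BOOTLOADER=systemd-boot   # edit /etc/kernel/cmdline
else
    BOOTLOADER=grub           # edit /etc/default/grub, then run update-grub
fi
echo "boot method: $BOOTLOADER"

# After a reboot, a non-empty listing here confirms IOMMU is active:
find /sys/kernel/iommu_groups/ -type l 2>/dev/null | head -n 5
```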

Now, at this stage and with some forum searching, I found that there seems to be a new guide, PCI(e) Passthrough (https://pve.proxmox.com/wiki/PCI(e)_Passthrough), which is largely the same as the previous guide but with the following differences, so I have some questions:

1) The IOMMU has to be activated on the kernel command line and, for systemd-boot, this is listed as follows: “The kernel commandline needs to be placed as a line in /etc/kernel/cmdline. Running /etc/kernel/postinst.d/zz-pve-efiboot sets it as the option line for all config files in loader/entries/proxmox-*.conf.” Does this mean that I have to:

a) Update the single line in my /etc/kernel/cmdline file to add “intel_iommu=on” as follows?

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
b) Run /etc/kernel/postinst.d/zz-pve-efiboot after the above change?
c) Run any other commands here before rebooting?
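If I understand the new guide correctly, step a) amounts to something like the following (demonstrated on a throwaway copy of the file rather than the real /etc/kernel/cmdline, since I'm not sure I have this right):

```shell
# Sketch of step a): append intel_iommu=on to the single line in
# /etc/kernel/cmdline. Shown on a temporary copy; on a real host, point
# CMDLINE at /etc/kernel/cmdline and then run
# /etc/kernel/postinst.d/zz-pve-efiboot (step b).
CMDLINE=$(mktemp)
echo 'root=ZFS=rpool/ROOT/pve-1 boot=zfs' > "$CMDLINE"

# add the flag only if it is not already present
grep -q 'intel_iommu=on' "$CMDLINE" || sed -i 's/$/ intel_iommu=on/' "$CMDLINE"
cat "$CMDLINE"
```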
2) Kernel modules: “After changing anything modules related, you need to refresh your initramfs. On Proxmox VE this can be done by executing: # update-initramfs -u -k all”
a) The above update-initramfs command does not seem to be mentioned in the previous guide (https://pve.proxmox.com/wiki/Pci_passthrough)?
b) The guide also mentions “If you are using systemd-boot make sure to sync the new initramfs to the bootable partitions”, which links to another article stating that pve-efiboot-tool refresh is required to be run… I assume this needs to be run even on a single-disk ZFS setup?
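Putting step e) and question 2) together, my understanding of the module side is roughly this (again demonstrated on a temporary file rather than the real /etc/modules):

```shell
# Sketch of step e): make sure the four vfio modules are listed in
# /etc/modules, one per line. Shown on a temporary file; on a real host
# use /etc/modules directly and follow up with:
#   update-initramfs -u -k all
# (plus pve-efiboot-tool refresh on systemd-boot systems, per question 2b).
MODULES=$(mktemp)
for m in vfio vfio_iommu_type1 vfio_pci vfio_virqfd; do
    grep -qx "$m" "$MODULES" || echo "$m" >> "$MODULES"
done
cat "$MODULES"
```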


Thanks for helping me clear the confusion up!
 

dcsapak

Proxmox Staff Member
Feb 1, 2016
https://pve.proxmox.com/wiki/Pci_passthrough is outdated (I ought to update that page soon)

the reference documentation has the correct information:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot_edit_kernel_cmdline

1) The IOMMU has to be activated on the kernel command line and, for systemd-boot, this is listed as follows: “The kernel commandline needs to be placed as a line in /etc/kernel/cmdline. Running /etc/kernel/postinst.d/zz-pve-efiboot sets it as the option line for all config files in loader/entries/proxmox-*.conf.” Does this mean that I have to:

a) Update the single line in my /etc/kernel/cmdline file to add “intel_iommu=on” as follows?

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

b) Run /etc/kernel/postinst.d/zz-pve-efiboot after the above change?
yes

b) It is mentioned “If you are using systemd-boot make sure to sync the new initramfs to the bootable partitions” which links to another article which states that “pve-efiboot-tool refresh” is required to be run… Assume this needs to be run even if using a single disk zfs setup?
this should only be necessary if you have multiple disks with ESPs (e.g. a RAID1 ZFS setup)
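if you want to check your setup, something like this should work (my assumption: a default PVE 6 systemd-boot install records the ESP UUIDs, one per line, in /etc/kernel/pve-efiboot-uuids):

```shell
# Count the ESPs registered with pve-efiboot-tool. Assumption: their
# UUIDs live one per line in /etc/kernel/pve-efiboot-uuids; more than one
# line means a multi-ESP setup (e.g. ZFS RAID1) where
# `pve-efiboot-tool refresh` has to sync each partition.
if [ -f /etc/kernel/pve-efiboot-uuids ]; then
    ESP_COUNT=$(wc -l < /etc/kernel/pve-efiboot-uuids)
else
    ESP_COUNT=0   # file absent, e.g. on a GRUB-booted host
fi
echo "registered ESPs: $ESP_COUNT"
```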
 

n1nj4888

Member
Jan 13, 2019
Thanks for the reply - I was able to get this working with the above pointers :)

Marking as SOLVED.
 
