[SOLVED] Enabling IOMMU on PVE6 (zfs) compared to PVE5.4 (ext4)?

n1nj4888

Hi All,

I did a clean install, moving from Proxmox 5.4 (single-disk ext4) to Proxmox 6 (single-disk ZFS), and I notice that I don’t seem to be able to get IOMMU enabled under PVE6.

I followed the instructions below (as I had with PVE5.4), but suspect that because I’m now booting ZFS, the host uses systemd-boot rather than GRUB as on PVE5.4 - https://pve.proxmox.com/wiki/Pci_passthrough

Following the above guide I did:

a) nano /etc/default/grub
b) Add “intel_iommu=on” to the GRUB_CMDLINE_LINUX_DEFAULT line below. I notice that there is a second line here that mentions the ZFS rpool, so should “intel_iommu=on” also be added to that line?

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"

c) Run update-grub
d) Run dmesg | grep -e DMAR -e IOMMU, which outputs:

Code:
[    0.012253] ACPI: DMAR 0x0000000079DF1E38 0000A8 (v01 INTEL  NUC8i5BE 00000047      01000013)
[    0.315012] DMAR: Host address width 39
[    0.315014] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.315021] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.315024] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.315029] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.315032] DMAR: RMRR base: 0x00000079d35000 end: 0x00000079d54fff
[    0.315034] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[    0.315038] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.315040] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.315042] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.317210] DMAR-IR: Enabled IRQ remapping in x2apic mode

e) add to /etc/modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

f) Reboot the PVE6 host.
g) Verify IOMMU isolation by running find /sys/kernel/iommu_groups/ -type l
h) Nothing is output (the directory is empty); see the expected output sketched after this list
i) The Proxmox web GUI also reports “No IOMMU detected, please activate it. See Documentation for further information.” when editing the PCI device passed through to the VM (via VMID -> Hardware -> PCI Device (hostpci0))
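For comparison, this is roughly what a working setup looks like. Only a sketch: the group numbers and PCI addresses below are illustrative, not taken from this machine.

Code:
# printed early in dmesg only when intel_iommu=on was actually applied at boot
dmesg | grep -e DMAR -e IOMMU | grep -i enabled
[    0.000000] DMAR: IOMMU enabled

# with isolation working, every device appears as a symlink inside an IOMMU group
find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:02.0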

Now, at this stage and after some forum searching, I found that there is a newer guide, PCI(e) Passthrough (https://pve.proxmox.com/wiki/PCI(e)_Passthrough), which is largely the same as the previous one but with the following differences, so I have some questions:

1) The IOMMU has to be activated on the kernel command line and, for systemd-boot, this is listed as follows: “The kernel commandline needs to be placed as a line in /etc/kernel/cmdline. Running /etc/kernel/postinst.d/zz-pve-efiboot sets it as the option line for all config files in loader/entries/proxmox-*.conf.” Does this mean that I have to:

a) Update the single line in my /etc/kernel/cmdline file to add “intel_iommu=on” as follows?

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
b) Run /etc/kernel/postinst.d/zz-pve-efiboot after the above change?
c) Run any other commands here before rebooting?
2) Kernel Modules: the new guide says “After changing anything modules related, you need to refresh your initramfs. On Proxmox VE this can be done by executing: # update-initramfs -u -k all”
a) The above update-initramfs command does not seem to be mentioned in the previous guide (https://pve.proxmox.com/wiki/Pci_passthrough)?
b) It is mentioned “If you are using systemd-boot make sure to sync the new initramfs to the bootable partitions”, which links to another article which states that “pve-efiboot-tool refresh” is required to be run… I assume this needs to be run even with a single-disk ZFS setup?


Thanks for helping me clear the confusion up!
 
https://pve.proxmox.com/wiki/Pci_passthrough is outdated (I ought to update this page soon)

the reference documentation has the correct information:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot_edit_kernel_cmdline

1) The IOMMU has to be activated on the kernel command line and, for systemd-boot, this is listed as follows: “The kernel commandline needs to be placed as a line in /etc/kernel/cmdline. Running /etc/kernel/postinst.d/zz-pve-efiboot sets it as the option line for all config files in loader/entries/proxmox-*.conf.” Does this mean that I have to:

a) Update the single line in my /etc/kernel/cmdline file to add “intel_iommu=on” as follows?

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
b) Run /etc/kernel/postinst.d/zz-pve-efiboot after the above change?
yes
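In other words, a minimal sketch of the whole sequence on a systemd-boot host, reusing the example cmdline from this thread:

Code:
nano /etc/kernel/cmdline
# keep it a single line, e.g.: root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
/etc/kernel/postinst.d/zz-pve-efiboot   # or: pve-efiboot-tool refresh
reboot
dmesg | grep -e DMAR -e IOMMU           # should now include "DMAR: IOMMU enabled"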

b) It is mentioned “If you are using systemd-boot make sure to sync the new initramfs to the bootable partitions”, which links to another article which states that “pve-efiboot-tool refresh” is required to be run… I assume this needs to be run even with a single-disk ZFS setup?
This should only be necessary if you have multiple disks with ESPs (e.g. a RAID1 ZFS setup).
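For the kernel modules step, a sketch of the initramfs refresh that applies either way (per the posts further down, on systemd-boot installs the ESP sync hook runs as part of it):

Code:
update-initramfs -u -k all
# on systemd-boot setups this also triggers the zz-pve-efiboot hook, which copies
# the new initrd to the initialized ESP(s); pve-efiboot-tool refresh forces the sync manually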
 
This post solved it for me. It took a while to get here. Please add this to the “GPU passthrough docs”!
Thanks!
 
Just wanted to say this fixed the issue for me also. Thanks for the help!
 
a) nano /etc/default/grub
b) Add “intel_iommu=on” to the GRUB_CMDLINE_LINUX_DEFAULT line below. I notice that there is a second line here that mentions the ZFS rpool, so should “intel_iommu=on” also be added to that line?

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
A little old post, but worth a shot.
Did you enter boot parameters like intel_iommu=on only in /etc/kernel/cmdline, or did you also add the option to /etc/default/grub?

So were the final conf files exactly like this?
under /etc/default/grub
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on" (did you put it here also?)

under
/etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on (like this?)

or just this?
root=ZFS=rpool/ROOT/pve-1 boot=zfs

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" with quotation marks or not?

Thank you
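Not an official answer, but going by the replies earlier in this thread: which file matters depends on the bootloader. A UEFI install with ZFS root uses systemd-boot and reads /etc/kernel/cmdline (no quotation marks there, it is one plain line), while a GRUB install reads GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub (quotation marks required, since that file uses shell syntax). A sketch of the systemd-boot variant with this thread's example pool:

Code:
cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
pve-efiboot-tool refresh   # writes the updated option line into the boot entries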
 
It is mentioned “If you are using systemd-boot make sure to sync the new initramfs to the bootable partitions”, which links to another article which states that “pve-efiboot-tool refresh” is required to be run… I assume this needs to be run even with a single-disk ZFS setup?

This should only be necessary if you have multiple disks with ESPs (e.g. a RAID1 ZFS setup).
Well, since I am using a ZFS mirror, what command do I need to use to update both disks?
 
It is mentioned “If you are using systemd-boot make sure to sync the new initramfs to the bootable partitions”, which links to another article which states that “pve-efiboot-tool refresh” is required to be run… I assume this needs to be run even with a single-disk ZFS setup?


Well, since I am using a ZFS mirror, what command do I need to use to update both disks?
pve-efiboot-tool refresh, which is nowadays automatically done during update-initramfs -u. This assumes that you have initialized multiple ESP partitions.
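A quick way to see how many ESPs were initialized is, as far as I can tell, the UUID list that pve-efiboot-tool keeps (the file name is from memory, so treat it as an assumption):

Code:
cat /etc/kernel/pve-efiboot-uuids
# prints one filesystem UUID per initialized ESP; two lines would mean two synced ESPs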
 
I have already done that, so you are referring to the below:
In setups with redundancy (RAID1, RAID10, RAIDZ*) all bootable disks (those being part of the first vdev) are partitioned with an ESP. This ensures the system boots even if the first boot device fails. The ESPs are kept in sync by a kernel postinstall hook script /etc/kernel/postinst.d/zz-pve-efiboot. The script copies certain kernel versions and the initrd images to EFI/proxmox/ on the root of each ESP and creates the appropriate config files in loader/entries/proxmox-*.conf. The pve-efiboot-tool script assists in managing both the synced ESPs themselves and their contents.
The ESPs are not kept mounted during regular operation, in contrast to grub, which keeps an ESP mounted on /boot/efi. This helps to prevent filesystem corruption to the vfat formatted ESPs in case of a system crash, and removes the need to manually adapt /etc/fstab in case the primary boot device fails.

So if the ESPs are not kept mounted, how do I check them? This link only clarifies which boot method you are using, not what I am asking about (based on your comment, of course): “This assumes that you have initialized multiple ESP partitions.”

So, is there any further elaboration about this? What do I have to do in order to find out?
 
Please show the output of pve-efiboot-tool refresh. It should show you that it updated multiple ESP partitions with multiple kernel versions. If it does not, then you might need to create, format, and/or initialize additional ESP partitions using gdisk and the pve-efiboot-tool.
More information can be found in the documentation: Setting up a new partition for use as synced ESP.
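For reference, that documentation section boils down to roughly the following (a sketch: /dev/sdX2 is a placeholder for whatever EF00/vfat partition you created with gdisk, so double-check the device name before formatting):

Code:
# after creating a partition of type EF00 with gdisk:
pve-efiboot-tool format /dev/sdX2
pve-efiboot-tool init /dev/sdX2
pve-efiboot-tool refresh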
 
Please show the output of pve-efiboot-tool refresh.
Code:
pve-efiboot-tool refresh
Running hook script 'pve-auto-removal'..
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/5F09-7C0F
        Copying kernel and creating boot-entry for 5.4.73-1-pve
        Copying kernel and creating boot-entry for 5.4.78-2-pve
Copying and configuring kernels on /dev/disk/by-uuid/5F09-DA43
        Copying kernel and creating boot-entry for 5.4.73-1-pve
        Copying kernel and creating boot-entry for 5.4.78-2-pve

According to the above, it keeps 2 kernel versions and writes to /dev/disk/by-uuid/5F09-DA43, which seems like one disk, but probably this is the 2 disks in the mirror, so I don't know how this is useful. Tell me exactly which line helped you understand this.

Thank you
 
It looks like you have two ESP partitions:
Copying and configuring kernels on /dev/disk/by-uuid/5F09-7C0F
Copying and configuring kernels on /dev/disk/by-uuid/5F09-DA43
Please verify with ls -ahl /dev/disk/by-uuid/5F09-* that these partitions are on different drives. If they are not, then something is wrong.
 
It looks like you have two ESP partitions:
Copying and configuring kernels on /dev/disk/by-uuid/5F09-7C0F
Copying and configuring kernels on /dev/disk/by-uuid/5F09-DA43
Please verify with ls -ahl /dev/disk/by-uuid/5F09-* that these partitions are on different drives. If they are not, then something is wrong.
Code:
ls -ahl /dev/disk/by-uuid/5F09-*
lrwxrwxrwx 1 root root 10 Dec 16 14:16 /dev/disk/by-uuid/5F09-7C0F -> ../../sda2
lrwxrwxrwx 1 root root 10 Dec 16 14:16 /dev/disk/by-uuid/5F09-DA43 -> ../../sdb2

...and it is 2 different disks, since during the Proxmox installation I chose to continue with a ZFS mirror.
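(A quick cross-check, assuming the same sda/sdb layout as above: list both disks and confirm each carries its own small vfat ESP next to the ZFS member partition.)

Code:
lsblk -o NAME,SIZE,FSTYPE /dev/sda /dev/sdb
# each disk should show a small vfat partition (sda2 / sdb2) alongside the ZFS partition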
 
