Problem with pass-through - cannot change initramfs

rysiusek

New Member
Apr 29, 2023
Hi,
I tried to set up GPU passthrough on my Proxmox host.
This is my second attempt at configuring it; last week I reinstalled Proxmox because I hit the same error and thought I had made a mistake in the configuration.
Following this thread:
https://forum.proxmox.com/threads/help-with-pass-through-pcie-for-j5005-igpu.111401/
I installed the old kernel 5.11.22-7-pve, which should work.

At this step I got an error:
Code:
root@prox:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-5.15.107-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.74-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.11.22-7-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.

Is this reported because I installed Proxmox on mmcblk0 (the eMMC)?
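For what it's worth, the "skipping ESP sync" message by itself usually just means the bootloader is plain GRUB on the root filesystem rather than a proxmox-boot-tool-managed ESP. A quick, hedged way to check which setup a host has (standard sysfs paths, nothing assumed beyond the proxmox-boot-uuids file):

```shell
# Did the host boot via UEFI or legacy BIOS?
if [ -d /sys/firmware/efi ]; then echo "UEFI boot"; else echo "legacy BIOS boot"; fi

# Does proxmox-boot-tool manage any ESPs? An absent file means plain
# GRUB handles booting and the "skipping ESP sync" note is harmless.
cat /etc/kernel/proxmox-boot-uuids 2>/dev/null \
  || echo "no proxmox-boot-tool managed ESPs"
```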

My Proxmox host is a Dell Wyse 5070 thin client with an Intel Celeron J4105.
I followed this:
https://pve.proxmox.com/wiki/PCI(e)_Passthrough

Proxmox version:
Code:
proxmox-ve: 7.4-1 (running kernel: 5.11.22-7-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.11.22-7-pve: 5.11.22-12
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

/etc/modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
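A quick sanity check after rebooting, to confirm the modules above were actually loaded (note: on Proxmox 8 / kernel 6.2 and later, vfio_virqfd was merged into the core vfio module and no longer shows up separately):

```shell
# List loaded vfio modules; expect vfio, vfio_iommu_type1, vfio_pci
# (vfio_virqfd only appears on older kernels)
lsmod | grep vfio
```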

fdisk -l
Code:
root@prox:~# fdisk -l
Disk /dev/mmcblk0: 14.68 GiB, 15758000128 bytes, 30777344 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt

Device           Start      End  Sectors  Size Type
/dev/mmcblk0p1      34     2047     2014 1007K BIOS boot
/dev/mmcblk0p2    2048  1050623  1048576  512M EFI System
/dev/mmcblk0p3 1050624 30777310 29726687 14.2G Linux LVM


Disk /dev/mapper/pve-swap: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 6.59 GiB, 7071596544 bytes, 13811712 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 119.24 GiB, 128035676160 bytes, 250069680 sectors
Disk model: SanDisk X600 M.2
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/Storage-vm--101--disk--0: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos

Device                                     Boot    Start      End  Sectors  Size Id Type
/dev/mapper/Storage-vm--101--disk--0-part1 *        2048 21606399 21604352 10.3G 83 Linux
/dev/mapper/Storage-vm--101--disk--0-part2      21608446 62912511 41304066 19.7G  5 Extended
/dev/mapper/Storage-vm--101--disk--0-part5      21608448 23607295  1998848  976M 82 Linux swap / Solaris
/dev/mapper/Storage-vm--101--disk--0-part6      23609344 62912511 39303168 18.7G 83 Linux

Partition 2 does not start on physical sector boundary.


Disk /dev/mapper/Storage-vm--102--disk--0: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt

Device                                       Start      End  Sectors  Size Type
/dev/mapper/Storage-vm--102--disk--0-part1      40     1063     1024  512K FreeBSD boot
/dev/mapper/Storage-vm--102--disk--0-part2    2048  2099199  2097152    1G FreeBSD swap
/dev/mapper/Storage-vm--102--disk--0-part3 2099200 33552383 31453184   15G FreeBSD ZFS

Partition 1 does not start on physical sector boundary.


Disk /dev/mapper/Storage-vm--102--disk--1: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/Storage-vm--102--disk--2: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt

Device                                       Start      End  Sectors  Size Type
/dev/mapper/Storage-vm--102--disk--2-part1    2048    67583    65536   32M EFI System
/dev/mapper/Storage-vm--102--disk--2-part2   67584   116735    49152   24M Linux filesystem
/dev/mapper/Storage-vm--102--disk--2-part3  116736   641023   524288  256M Linux filesystem
/dev/mapper/Storage-vm--102--disk--2-part4  641024   690175    49152   24M Linux filesystem
/dev/mapper/Storage-vm--102--disk--2-part5  690176  1214463   524288  256M Linux filesystem
/dev/mapper/Storage-vm--102--disk--2-part6 1214464  1230847    16384    8M Linux filesystem
/dev/mapper/Storage-vm--102--disk--2-part7 1230848  1427455   196608   96M Linux filesystem
/dev/mapper/Storage-vm--102--disk--2-part8 1427456 67108830 65681375 31.3G Linux filesystem
 
On this part I had error:
Code:
root@prox:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-5.15.107-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.74-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.11.22-7-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
I don't see any errors. It just tells you that the host bootloader is not using ESP-partitions and/or is not managed by proxmox-boot-tool. Maybe you need to run an additional command to update the bootloader?
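For a GRUB-on-root install (which that message suggests), there is typically nothing extra to run for the initramfs itself; GRUB reads it straight from /boot. The command usually paired with it is update-grub, to pick up kernel command-line changes. A sketch, assuming a standard GRUB setup:

```shell
# Rebuild the initramfs for every installed kernel
update-initramfs -u -k all

# Regenerate the GRUB config so changes to /etc/default/grub
# (e.g. intel_iommu=on) actually reach the kernel command line
update-grub

# Systems set up with proxmox-boot-tool would use this instead:
# proxmox-boot-tool refresh
```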
 
Hmm, so how do I update the bootloader with a different command?

Code:
root@prox:~# dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[    0.009548] ACPI: DMAR 0x0000000077F5C5B0 0000A8 (v01 INTEL  GLK-SOC  00000003 BRXT 0100000D)
[    0.009612] ACPI: Reserving DMAR table memory at [mem 0x77f5c5b0-0x77f5c657]
[    0.052741] DMAR: IOMMU enabled
[    0.171697] DMAR: Host address width 39
[    0.171699] DMAR: DRHD base: 0x000000fed64000 flags: 0x0
[    0.171709] DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 9e2ff0505e
[    0.171714] DMAR: DRHD base: 0x000000fed65000 flags: 0x1
[    0.171724] DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.171729] DMAR: RMRR base: 0x00000077ed8000 end: 0x00000077ef7fff
[    0.171733] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[    0.171737] DMAR-IR: IOAPIC id 1 under DRHD base  0xfed65000 IOMMU 1
[    0.171740] DMAR-IR: HPET id 0 under DRHD base 0xfed65000
[    0.171742] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.173684] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    2.448672] DMAR: No ATSR found
[    2.448681] DMAR: dmar0: Using Queued invalidation
[    2.448688] DMAR: dmar1: Using Queued invalidation
[    2.451398] DMAR: Intel(R) Virtualization Technology for Directed I/O
[  208.456472] DMAR: DRHD: handling fault status reg 2
[  208.456485] DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 0 [fault reason 02] Present bit in context entry is clear
Grub config
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
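The DMAR fault on device 00:02.0 (the iGPU) above is worth correlating with the IOMMU grouping. A common diagnostic, sketched here against the standard sysfs layout, lists every device per group; the GPU should be alone in its group, or share it only with devices that are passed through with it:

```shell
#!/bin/sh
# Print each IOMMU group and the PCI devices it contains
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    # ${d##*/} is the PCI address, e.g. 0000:00:02.0
    lspci -nns "${d##*/}"
  done
done
```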
 
On this part I had error:
Code:
root@prox:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-5.15.107-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.15.74-1-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-5.11.22-7-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.

Issue is reported because I installed proxmox on mmcblk0?
Maybe, I don't know. Maybe the bootloader section of the Proxmox manual can help you configure Proxmox to update the right boot partition?
 
I have the same issue after the last Proxmox update: the previously working passthrough is broken, and I can't run "update-initramfs -u -k all" without getting the "skipping ESP sync" message. The funny thing is that it says it skips it, but in fact it retries several times and then the command stops.

Code:
root@pve:/etc/default# pveversion -verbose
proxmox-ve: 7.4-1 (running kernel: 5.15.116-1-pve)
pve-manager: 7.4-16 (running version: 7.4-16/0f39f621)
pve-kernel-5.15: 7.4-6
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.3-1
proxmox-backup-file-restore: 2.4.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-5
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
 
Same issue here with a fresh Proxmox 8 install:

root@pve-multi:~# update-initramfs -k all -u

update-initramfs: Generating /boot/initrd.img-6.2.16-15-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.

update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.

Any ideas please?
 
Finally, I did a new install of Proxmox 8 and got the host to show the desired passthrough results. Nevertheless, in VMs (Windows 11) it doesn't work; it crashes the Proxmox node instead.

"No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync" was still shown, but only once, and the command finished.

Code:
dmesg | grep 'remapping'

dmesg | grep -E "DMAR|IOMMU"

intel_gpu_top
 
Similar issue here on a fairly new install of Proxmox 8, attempting GPU PCIe passthrough; not sure what to do to fix it:

root@pve:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.2.16-15-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.2.16-14-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.2.16-12-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
 
New installation of Proxmox 8, trying GPU PCI passthrough too, and the same error:
root@gamingnode:~# proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
root@gamingnode:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
 
Same here..... can't figure it out!
Code:
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
setupcon: The keyboard model is unknown, assuming 'pc105'. Keyboard may be configured incorrectly.
W: Possible missing firmware /lib/firmware/amdgpu/ip_discovery.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega10_cap.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/sienna_cichlid_cap.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/navi12_cap.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/aldebaran_cap.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/gc_11_0_0_toc.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/sienna_cichlid_mes1.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/sienna_cichlid_mes.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/navi10_mes.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/gc_11_0_3_mes.bin for module amdgpu
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
root@debianT430:~#

Not sure what the missing firmware messages are either - I thought the idea was to not load any firmware until the guest uses the card?
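Those "Possible missing firmware" lines are only warnings from update-initramfs scanning the amdgpu module's firmware list; they don't block the initramfs build. If the card is intended purely for passthrough, one common (hedged) approach is to keep the host driver away entirely; the file name blacklist.conf is just a convention:

```shell
# Stop the host from ever binding amdgpu to the card; the firmware
# warnings then become irrelevant since the guest loads its own driver
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u -k all
```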
 
New installation of Proxmox 8, trying GPU PCI passthrough too, and the same error:
root@gamingnode:~# proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
root@gamingnode:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Looks like no one can help us with this issue? Is this a problem with Proxmox v8, or did older versions have it too?
 
Looks like no one can help us with this issue? Is this a problem with Proxmox v8, or did older versions have it too?
Hmmm... we must be missing something? I also find that instructions for GPU passthrough vary depending on the source. I've been looking at the local documentation (/pve-docs/chapter-qm.html#qm_pci_passthrough).

Maybe we should just move on to the next step ignoring those 'errors'?

I only came onboard with V 8.0, so not sure how it was before...
 
Blacklisting doesn't seem to be working either...

Code:
01:00.1 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe [14e4:165f]
        DeviceName: NIC2
        Subsystem: Dell NetXtreme BCM5720 Gigabit Ethernet PCIe [1028:063b]
        Kernel driver in use: tg3
        Kernel modules: tg3
02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Vega 10 PCIe Bridge [1022:1470] (rev 01)
        Kernel driver in use: pcieport
03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Vega 10 PCIe Bridge [1022:1471]
        Subsystem: Advanced Micro Devices, Inc. [AMD] Vega 10 PCIe Bridge [1022:1471]
        Kernel driver in use: pcieport
04:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 [Instinct MI25/MI25x2/V340/V320] [1002:6860] (rev 01)
        Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Radeon PRO V320 [1002:0c35]
        Kernel modules: amdgpu
08:00.0 PCI bridge [0604]: Renesas Technology Corp. SH7758 PCIe Switch [PS] [1912:001d]
        Subsystem: Renesas Technology Corp. SH7758 PCIe Switch [PS] [1912:001d]
        Kernel driver in use: pcieport

I'm going to check out some YouTube tutorials and see where that gets me...
 
So, I don't know about you guys, but I have had some success. I watched a couple of videos; one pointed out that at this stage we should be able to pass anything through we want, without doing anything else. In practice it's usually more complicated than that, but I thought I'd try passing the GPU to a Windows VM, and it worked! Kind of... Although the driver loaded, the VM crashed quite quickly. I think this is because the two PCIe bridges on the card hadn't been passed through. I used these resources to get where I am now:

local documentation - PCI(e) Passthrough (/pve-docs/chapter-qm.html#qm_pci_passthrough) and...

https://pve.proxmox.com/wiki/PCI_Passthrough#Introduction and recommended by the video I watched...

https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/

I think we just need to move on from this step and make sure we do all the other things suggested in those docs.

Hope this helps someone!
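One way to confirm the bridge situation described above, assuming the addresses from the earlier lspci listing (02:00.0, 03:00.0, 04:00.0):

```shell
# Show the PCIe topology as a tree; the Vega GPU at 04:00.0 sits
# behind the two bridges at 02:00.0 and 03:00.0
lspci -t

# Check which driver (if any) is bound to the GPU itself; after a
# successful blacklist/vfio setup this should show vfio-pci or nothing
lspci -nnk -s 04:00.0
```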
 
Guys this might help you. It worked for me.

:~# nano /etc/default/grub

Add intel_iommu=on and iommu=pt at the end of the line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

: ~# update-grub

To check if IOMMU is enabled:

:~# dmesg | grep -e DMAR -e IOMMU

To find your graphics card, use the following line:

:~# lspci | grep VGA

You should get:

00:02.0 VGA compatible controller: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630]
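The numeric vendor:device pair needed for vfio.conf (8086:3e92 in this example) isn't shown by plain lspci; lspci -nn prints it in brackets. A sketch of extracting it (the sed pattern is just one way to do it):

```shell
# -nn adds numeric IDs, e.g.:
# 00:02.0 VGA compatible controller [0300]: Intel ... [8086:3e92]
lspci -nn | grep -i vga

# Pull out just the vendor:device pair from that line
lspci -nn | grep -i vga \
  | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'
```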

To blacklist the device from the host driver, use:
:~# echo "options vfio-pci ids=8086:3e92 disable_vga=1"> /etc/modprobe.d/vfio.conf

Then to update modules:

:~# update-initramfs -u -k all

If it doesn't work because there is no link to an ESP partition, then check:

:~# proxmox-boot-tool status

To check which partition is /boot with vfat format:

:~# lsblk -o +FSTYPE

To initialize ESP sync first unmount boot partition:

:~# umount /boot/efi

Then link the vfat partition with proxmox-boot-tool:

:~# proxmox-boot-tool init /dev/XXXXXXXX, where XXXXXXXX is the name of the vfat partition from lsblk -o +FSTYPE

Then:

:~# mount -a

Then to update modules:

:~# update-initramfs -u -k all



Reboot
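The steps above, collected into one sketch; /dev/sda2 here is a placeholder for whatever vfat ESP lsblk shows on your box (and note a later reply in this thread reports this sequence left an install unbootable, so have a backup or rescue medium ready):

```shell
lsblk -o NAME,FSTYPE,MOUNTPOINT   # identify the vfat ESP (here: sda2)
umount /boot/efi                  # free the partition for init
proxmox-boot-tool init /dev/sda2  # register the ESP with proxmox-boot-tool
mount -a                          # remount everything per /etc/fstab
update-initramfs -u -k all        # should now sync to the ESP, not skip
reboot
```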
 
@Dunadan

Thanks! Your tips on how to re-init the boot partition helped me fix the ESP-partition-related issues on a fresh Proxmox 8.1 install.
 
Guys this might help you. It worked for me.

[... full ESP re-init steps quoted from the post above ...]

Reboot
Actually, this made my Proxmox install non-bootable. I had a GRUB command line available, but I didn't know exactly what to do. Now I'm trying an install with ZFS.
 
