VM won't boot after adding PCI passthrough of GPU

potts_24

New Member
Jun 11, 2024
Thank you in advance for taking the time to support me on this, especially as a first-time Proxmox user. I've spent days trying to get this working before bugging the community.

I've been through several tutorials (including this one) and community posts for GPU passthrough, but no matter what I try, I can't get my VM to even boot once the PCI device is added to the VM's hardware list. The VM runs fine when no GPU is passed through and it uses the default display. I have tried the GPU in the primary x16 PCIe slot (the "top slot") and in the secondary x16 slot, with no difference between the two. The motherboard BIOS is set to use the iGPU as the primary display, and it does output the host console.

Along the way, I had an issue causing Windows 11 system repair loops, but I was able to fix that by changing the CPU type from 'host' to 'x86-64-v3', and it has been stable since.

If there is any information I haven't provided that would be useful, please let me know and I'll edit the post to include more detail.

Thank you again!


System
Hardware
  • Intel 14900K
  • ASUS Pro WS W680-ACE
  • 4x Micron 32GB DDR5-5600 ECC UDIMM 2Rx8 CL46
  • 1x ASUS RTX 4090 ROG Strix OC 24GB (sits in IOMMU group 17 along with its audio controller only)
Software
  • Proxmox: VE 8.2.2
  • Kernel: Linux 6.8.4-3-pve (2024-05-02T11:55Z)
  • Boot Mode: EFI
  • Manager Version: pve-manager/8.2.2/9355359cd7afbae4
  • systemd-boot mode in use
VM
  • Windows 11 Pro
  • Single 100GB virtio0 disk on a 4TB Sabrent NVMe drive; the EFI disk and TPM state are on the same drive
  • 30 cores, CPU type x86-64-v3
  • 16GB of RAM allocated currently, to be expanded to 120GB later
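
For reference, the passthrough entry added via the GUI ends up in the VM's config file under /etc/pve/qemu-server/. The snippet below is only an illustration of the relevant lines (the VMID and exact option set are placeholders, not a copy of my actual file):

Code:
bios: ovmf
machine: q35
cpu: x86-64-v3
# all functions of the GPU (02:00.0 + 02:00.1) as a PCIe device,
# marked as the guest's primary GPU
hostpci0: 0000:02:00,pcie=1,x-vga=1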

Host Config Files

/etc/kernel/cmdline
Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on initcall_blacklist=sysfb_init
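
This is the file used by proxmox-boot-tool in systemd-boot mode; after editing it, the change is written to the ESP(s) with the command below (the /proc/cmdline output further down confirms it was applied here):

Code:
# copy the updated kernel command line into the ESP boot entries
proxmox-boot-tool refresh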

/etc/modprobe.d/vfio.conf
Code:
options vfio-pci ids=10de:2684,10de:22ba
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep nvidiafb pre: vfio-pci
softdep nvidia_drm pre: vfio-pci
softdep drm pre: vfio-pci

/etc/modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

/etc/modprobe.d/kvm.conf
Code:
options kvm ignore_msrs=1 report_ignored_msrs=0

/etc/modprobe.d/blacklist.conf
empty
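
The modprobe options and module list above only take effect once the initramfs is rebuilt (the vfio_pci "add [10de:...]" lines in dmesg below show this has been applied here). As far as I can tell, on the 6.8 kernel vfio_virqfd has been merged into the vfio module, so that entry in /etc/modules should be harmless but is no longer needed.

Code:
# rebuild the initramfs for all installed kernels, then reboot
update-initramfs -u -k all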

/etc/default/grub
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
#GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX=""
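
Since the host is in systemd-boot mode (see above), this GRUB file shouldn't be the one providing the kernel command line; which loader the ESPs are configured for can be double-checked with:

Code:
# reports the kernels and boot mode (systemd-boot vs. GRUB) configured on the ESP(s)
proxmox-boot-tool status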


Host dmesg's

root@pve:~# dmesg | grep -E "DMAR|IOMMU"
Code:
[    0.003743] ACPI: DMAR 0x00000000703F9000 000088 (v01 INTEL  EDK2     00000002      01000013)
[    0.003771] ACPI: Reserving DMAR table memory at [mem 0x703f9000-0x703f9087]
[    0.128754] DMAR: IOMMU enabled
[    0.287539] DMAR: Host address width 39
[    0.287540] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.287545] DMAR: dmar0: reg_base_addr fed90000 ver 4:0 cap 1c0000c40660462 ecap 29a00f0505e
[    0.287547] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.287550] DMAR: dmar1: reg_base_addr fed91000 ver 5:0 cap d2008c40660462 ecap f050da
[    0.287552] DMAR: RMRR base: 0x0000007c000000 end: 0x000000807fffff
[    0.287555] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.287556] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.287557] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.289087] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.417981] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[    0.475838] DMAR: No ATSR found
[    0.475839] DMAR: No SATC found
[    0.475841] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.475841] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.475843] DMAR: IOMMU feature nwfs inconsistent
[    0.475844] DMAR: IOMMU feature dit inconsistent
[    0.475845] DMAR: IOMMU feature sc_support inconsistent
[    0.475846] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.475847] DMAR: dmar0: Using Queued invalidation
[    0.475851] DMAR: dmar1: Using Queued invalidation
[    0.479987] DMAR: Intel(R) Virtualization Technology for Directed I/O

root@pve:~# dmesg | grep -i vfio
Code:
[    2.558033] VFIO - User Level meta-driver version: 0.3
[    2.562568] vfio-pci 0000:02:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=none
[    2.562668] vfio_pci: add [10de:2684[ffffffff:ffffffff]] class 0x000000/00000000
[    2.610692] vfio_pci: add [10de:22ba[ffffffff:ffffffff]] class 0x000000/00000000
[    2.908999] vfio-pci 0000:02:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=none
[   65.304952] vfio-pci 0000:02:00.1: enabling device (0000 -> 0002)

root@pve:~# lspci -nnk | grep 'NVIDIA'
Code:
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation AD102 [GeForce RTX 4090] [10de:2684] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation AD102 High Definition Audio Controller [10de:22ba] (rev a1)
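
To double-check that group 17 really contains only these two functions, a generic shell loop over the IOMMU groups (not Proxmox-specific, just sysfs):

Code:
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g="${d#/sys/kernel/iommu_groups/}"; g="${g%%/*}"
    printf 'IOMMU group %s\t' "$g"
    lspci -nns "${d##*/}"
done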

root@pve:~# cat /proc/cmdline
Code:
initrd=\EFI\proxmox\6.8.4-3-pve\initrd.img-6.8.4-3-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on initcall_blacklist=sysfb_init
 
Continued trying to get this working and updated to Proxmox VE 8.2.4, but with no change.

I can't understand why adding the GPU causes a complete failure to boot: CPU usage is completely flat and there is no video output from the GPU.
 
I can't understand why adding the GPU causes a complete failure to boot: CPU usage is completely flat and there is no video output from the GPU.
Check journalctl from around the time of starting the VM (use the arrow keys to scroll). There should be relevant lines (that don't contain the strings you grep for).
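
For example, something generic like this (nothing specific to this setup) shows the host log around the VM start attempt:

Code:
# full log of the current host boot; scroll to the end
journalctl -b
# or only the last few minutes around starting the VM
journalctl --since "10 minutes ago"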
 
