IGD Passthrough setup broke with Proxmox 6.2

adamb

I had 6 setups update to 6.2 last night that are now having issues booting a VM with IGD passthrough.

Here is what I get with the first boot of the VM.

kvm: -device vfio-pci,host=00:02.0,addr=0x18,x-vga=on,x-igd-opregion=on: vfio 0000:00:02.0: failed to open /dev/vfio/1: No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

If I then use the GUI to add the PCI device and start the VM, it creates /dev/vfio/1, but I don't get a display on the physical monitor. So I then delete the line I just added so that it uses my original args command, and the VM fires up and displays on the physical monitor with no issues.

I tried going back to an older kernel without any luck, so I think this is QEMU-specific. Any help is greatly appreciated, as this is a very urgent issue for us.
 
OK, so when I went back a kernel, I went back to the latest Proxmox 5 kernel (4.15.18-28-pve). However, when I go back to 5.3.18-3-pve, the issue is resolved and the VM starts and displays as expected.

So this does look like some type of kernel issue with 5.4.34-1-pve.
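For anyone who wants to boot back into the older kernel without rolling back any packages, one way is to pick the 5.3.18-3-pve entry from the "Advanced options" submenu at boot, or pin it as the default. A rough sketch (the exact menu entry title is an assumption here and must be checked against your own grub.cfg):
Code:
# list the exact submenu / menuentry titles first
grep -E "(menuentry|submenu) '" /boot/grub/grub.cfg
# then pin the old kernel in /etc/default/grub, e.g. (title is just an example):
# GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.3.18-3-pve"
update-grub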
 
Unfortunately I have the same problem.

Relevant startup line: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1
And response:
failed to open /dev/vfio/1: No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

I went back to 5.3.18-3-pve, but it didn't resolve the issue.
 
Could you please post the output of dmesg with the new kernel, as well as the IOMMU groups?

ls /sys/kernel/iommu_groups/*/devices/
 
qemu-server config file (IGD passthrough is done using legacy mode; it worked perfectly on Proxmox 6.1 with a full 4K display):
Code:
agent: 1,fstrim_cloned_disks=1
args: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1 -device vfio-pci,host=00:1f.3 -cpu host,hv_spinlocks=0x4096,hv_relaxed,hv_vapic,hv_vpindex,hv_tlbflush,hv_ipi,hv_runtime,hv_time,hv_synic,hv_stimer,hv_vendor_id='KVM Hv'
bootdisk: scsi0
cores: 2
cpu: host
cpuunits: 4096
ide2: local:iso/virtio-win-0.1.173.iso,media=cdrom,size=385296K
machine: pc-i440fx-2.2
memory: 8192
name: htpc
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
numa: 1
onboot: 1
ostype: win10
protection: 1
runningmachine: pc-i440fx-2.2
scsi0: local-zfs:vm-100-disk-0,discard=on,size=64G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=438b2092-f8d2-47dd-aab5-eaedb601d0bd
sockets: 1
startup: order=4
tablet: 0
usb0: host=045e:0800
usb1: host=1-1,usb3=1
usb2: host=1-2,usb3=1
vga: none

IOMMU groups:
Code:
root@pve:~# ls /sys/kernel/iommu_groups/*/devices/
/sys/kernel/iommu_groups/0/devices/:
0000:00:00.0

/sys/kernel/iommu_groups/10/devices/:
0000:00:1c.5

/sys/kernel/iommu_groups/11/devices/:
0000:00:1c.6

/sys/kernel/iommu_groups/12/devices/:
0000:00:1c.7

/sys/kernel/iommu_groups/13/devices/:
0000:00:1d.0

/sys/kernel/iommu_groups/14/devices/:
0000:00:1f.0  0000:00:1f.2  0000:00:1f.3  0000:00:1f.4

/sys/kernel/iommu_groups/15/devices/:
0000:00:1f.6

/sys/kernel/iommu_groups/16/devices/:
0000:05:00.0

/sys/kernel/iommu_groups/17/devices/:
0000:06:00.0

/sys/kernel/iommu_groups/1/devices/:
0000:00:02.0

/sys/kernel/iommu_groups/2/devices/:
0000:00:08.0

/sys/kernel/iommu_groups/3/devices/:
0000:00:14.0

/sys/kernel/iommu_groups/4/devices/:
0000:00:16.0

/sys/kernel/iommu_groups/5/devices/:
0000:00:17.0

/sys/kernel/iommu_groups/6/devices/:
0000:00:1b.0

/sys/kernel/iommu_groups/7/devices/:
0000:00:1b.2

/sys/kernel/iommu_groups/8/devices/:
0000:00:1b.4

/sys/kernel/iommu_groups/9/devices/:
0000:00:1c.0

contents of /dev/vfio:
Code:
root@pve:~# ls -1 /dev/vfio
17
vfio

full dmesg: attached.
 
Unfortunately I have the same problem.

Relevant startup line: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1
And response:
failed to open /dev/vfio/1: No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

I went back to 5.3.18-3-pve, but it didn't resolve the issue.

Interesting. I had 7 machines that I was able to correct with the 5.3.18-3-pve kernel.

With 5.4.34-1-pve, if I add the video adapter as a PCI device in the GUI and start the VM once, /dev/vfio/1 gets created, and then I can boot the VM with the original args line.
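For reference, what that one-time GUI start seems to do is rebind the IGD to vfio-pci, which is what makes the /dev/vfio/<group> node appear. A rough manual equivalent from the host shell (just a sketch, assuming the IGD is at 0000:00:02.0 and is not currently claimed by i915):
Code:
# if the device is still bound to i915, unbind it first
# echo 0000:00:02.0 > /sys/bus/pci/drivers/i915/unbind
# tell the kernel to use vfio-pci for this device and reprobe it
echo vfio-pci > /sys/bus/pci/devices/0000:00:02.0/driver_override
echo 0000:00:02.0 > /sys/bus/pci/drivers_probe
# the group device should now exist
ls /dev/vfio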

Hoping the devs can figure this one out.
 
I guess you configured the device to be bound to vfio-pci via /etc/modprobe.d. Since kernel 5.4, vfio is not a module but is built into the kernel, so you have to make those settings on the kernel command line (so that the device gets bound to vfio-pci on boot). If the device is configured with 'hostpciX', we rebind the device to the module automatically on VM start, but this does not happen when using the 'args' method.
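In practice that means moving the IDs from /etc/modprobe.d/vfio.conf onto the kernel command line. A minimal sketch (the 8086:3e92 ID is only an example taken from later in this thread and has to match your own IGD):
Code:
# /etc/default/grub - the built-in vfio-pci picks these up at boot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on vfio-pci.ids=8086:3e92"
# then apply and reboot:
# update-grub && reboot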
 
I have to correct my earlier statement: the VM started on 5.3.18-3-pve after changing the booted kernel, but it is stuck in an endless boot loop with 100% usage of one core and no output on HDMI. I did not roll back any packages to an earlier version, I just edited the boot options.

Contents of relevant boot files:
/etc/modprobe.d/kvm.conf
Code:
options kvm ignore_msrs=1
options kvm report_ignored_msrs=0

/etc/modprobe.d/pve-blacklist.conf
Code:
# This file contains a list of modules which are not supported by Proxmox VE

# nidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb

# turn off Intel UHD Graphics
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915

# turn off Intel HD Audio
blacklist i2c_i801
blacklist i2c_smbus

# turn off wifi
blacklist iwlwifi

/etc/modprobe.d/vfio.conf
Code:
options vfio-pci ids=8086:3e92,8086:a2c9,8086:a2a1,8086:a2f0,8086:a2a3,8086:24fd

/etc/modules
Code:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# Generated by sensors-detect on Sat Jul 20 17:03:42 2019
# Chip drivers
coretemp

/etc/default/grub
Code:
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs kvm.ignore_msrs=1 intel_iommu=on video=vesafb:off,efifb:off"

lspci -v (with the 5.4 kernel): attached.
 
I guess you configured the device to be bound to vfio-pci via /etc/modprobe.d. Since kernel 5.4, vfio is not a module but is built into the kernel, so you have to make those settings on the kernel command line (so that the device gets bound to vfio-pci on boot). If the device is configured with 'hostpciX', we rebind the device to the module automatically on VM start, but this does not happen when using the 'args' method.

Yikes, that doesn't sound very promising. What are the chances of getting hostpci to work properly? I would love to just add a hostpci line, but the display never works.
 
Hey, first post on here. Fairly new to Proxmox - been using it quite extensively for over a month. I was on 6.1-8 and managed to get iGPU passthrough working using the instructions on the wiki (and a few pages on this forum). However, I upgraded to 6.2-4, which broke iGPU passthrough to the VM (an Ubuntu 18.04-4 guest).

When I start the VM, I get the following:

Code:
kvm: -device vfio-pci,host=00:02.0: vfio 0000:00:02.0: failed to open /dev/vfio/2: No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

Then, following @adamb's instructions above, I add a PCI device using the web GUI with All Functions and Primary GPU checked and try to start. It creates the device, but the VM still doesn't start:

Code:
kvm: -device vfio-pci,host=00:02.0: vfio 0000:00:02.0: device is already attached
TASK ERROR: start failed: QEMU exited with code 1

But now, when I delete the PCI device I just added above and start the VM, it works! And it continues to work until the Proxmox host is rebooted.

P.S. I've set up iGPU passthrough on a remote headless server for the purpose of transcoding, so I don't have a physical monitor attached. If it helps, I'm attaching additional information below:

/etc/modprobe.d/blacklist-hetzner.conf

Code:
### silence any onboard speaker
blacklist pcspkr
blacklist snd_pcsp
### i915 driver blacklisted due to various bugs
### especially in combination with nomodeset
blacklist i915
### mei driver blacklisted due to serious bugs
blacklist mei
blacklist mei-me
blacklist sm750fb

/etc/modprobe.d/vfio.conf
Code:
options vfio-pci ids=8086:3e98

/etc/default/grub
Code:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

cat /etc/modules-load.d/vfio.conf
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

/etc/pve/qemu-server/100.conf
Code:
agent: 1
args: -device vfio-pci,host=00:02.0
bootdisk: scsi0
cores: 12
cpu: host
ide2: none,media=cdrom
memory: 38912
name: 18.04-Mediabox
net0: virtio=00:50:56:00:D9:B6,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
parent: Cloudbox2
scsi0: local:100/vm-100-disk-0.qcow2,discard=on,size=953G
scsihw: virtio-scsi-pci
smbios1: uuid=0a690f9f-d5cf-4822-a401-80f4b1c7b0d4
sockets: 1
vmgenid: 945e7dbc-c363-4207-a2bd-610cd95c30a9

Now for the output of the two commands requested above:

#1 ls /sys/kernel/iommu_groups/*/devices/

Code:
/sys/kernel/iommu_groups/0/devices/:
0000:00:00.0

/sys/kernel/iommu_groups/10/devices/:
0000:02:00.0

/sys/kernel/iommu_groups/1/devices/:
0000:00:01.0  0000:01:00.0

/sys/kernel/iommu_groups/2/devices/:
0000:00:02.0

/sys/kernel/iommu_groups/3/devices/:
0000:00:12.0

/sys/kernel/iommu_groups/4/devices/:
0000:00:14.0  0000:00:14.2

/sys/kernel/iommu_groups/5/devices/:
0000:00:16.0

/sys/kernel/iommu_groups/6/devices/:
0000:00:17.0

/sys/kernel/iommu_groups/7/devices/:
0000:00:1b.0

/sys/kernel/iommu_groups/8/devices/:
0000:00:1d.0

/sys/kernel/iommu_groups/9/devices/:
0000:00:1f.0  0000:00:1f.4  0000:00:1f.5  0000:00:1f.6

#2 Full dmesg: attached.
 
I resolved the issue of the VM refusing to start by adding vfio-pci.ids=XXX to:
/etc/default/grub
Code:
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs kvm.ignore_msrs=1 intel_iommu=on video=vesafb:off,efifb:off vfio-pci.ids=8086:3e92,8086:a2c9,8086:a2a1,8086:a2f0,8086:a2a3,8086:24fd"
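As a quick sanity check after a change like this (just a sketch, not from the original post): the setting has to be applied with update-grub, and after a reboot the IGD should show vfio-pci as the driver in use:
Code:
update-grub
reboot
# after the reboot:
lspci -nnk -s 00:02.0
# the output should end with something like: Kernel driver in use: vfio-pci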

The problem remains that Windows doesn't boot and one core is pinned at 100% utilization. I will track it down later...
 
It seems the only way to pass the device through successfully is without specifying "addr=0x02"; Windows then boots fine, but then we need a second virtual card. Yikes, not good.
 
I guess you configured the device to be bound to vfio-pci via /etc/modprobe.d. Since kernel 5.4, vfio is not a module but is built into the kernel, so you have to make those settings on the kernel command line (so that the device gets bound to vfio-pci on boot). If the device is configured with 'hostpciX', we rebind the device to the module automatically on VM start, but this does not happen when using the 'args' method.
Thank you. So the solution for me (given it's a headless machine) was simply the following:
1) Moved /etc/modprobe.d/vfio.conf to /etc/modprobe.d/vfio.conf.old
2) Removed the args: line from /etc/pve/qemu-server/100.conf
3) Edited /etc/default/grub by adding vfio-pci.ids=8086:3e98 to GRUB_CMDLINE_LINUX=""
4) Updated GRUB and rebooted the node
5) Added the PCI device to the VM via the web GUI and selected All Functions (not Primary GPU) - see the config sketch below
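For reference, the hostpci entry that ends up in the VM config after that GUI step should look roughly like this (a sketch; as far as I understand it, "All Functions" means no function number is given, and "Primary GPU" would add x-vga=1):
Code:
# /etc/pve/qemu-server/100.conf (relevant line only)
hostpci0: 00:02
# with "Primary GPU" checked it would instead be something like:
# hostpci0: 00:02,x-vga=1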

Not sure if this helps somebody in a similar situation.
 
It seems the only way to pass the device through successfully is without specifying "addr=0x02"; Windows then boots fine, but then we need a second virtual card. Yikes, not good.
Thank you. So the solution for me (given it's a headless machine) was simply the following:
1) Moved /etc/modprobe.d/vfio.conf to /etc/modprobe.d/vfio.conf.old
2) Removed the args: line from /etc/pve/qemu-server/100.conf
3) Edited /etc/default/grub by adding vfio-pci.ids=8086:3e98 to GRUB_CMDLINE_LINUX=""
4) Updated GRUB and rebooted the node
5) Added the PCI device to the VM via the web GUI and selected All Functions (not Primary GPU)

Not sure if this helps somebody in a similar situation.

I want to confirm that these steps have resolved my issues, but I still have to use the args option in the VM config to actually get a display on a physical monitor.

1) Moved /etc/modprobe.d/vfio.conf to /etc/modprobe.d/vfio.conf.old
2) Edited /etc/default/grub by adding vfio-pci.ids=8086:3e98 to GRUB_CMDLINE_LINUX=""
3) Updated GRUB and rebooted the node

Now the VM will boot with no issues after a fresh host reboot and I am getting a display on a physical monitor.
 
I want to confirm that these steps have resolved my issues, but I still have to use the args option in the VM config to actually get a display on a physical monitor.

1) Moved /etc/modprobe.d/vfio.conf to /etc/modprobe.d/vfio.conf.old
2) Edited /etc/default/grub by adding vfio-pci.ids=8086:3e98 to GRUB_CMDLINE_LINUX=""
3) Updated GRUB and rebooted the node

Now the VM will boot with no issues after a fresh host reboot and I am getting a display on a physical monitor.

Thank you for the hint. I managed to pass through the GPU, but I am stuck with audio, as I would like to use Kodi on Ubuntu in the VM.
I added the onboard Intel audio address to GRUB, separated by a comma. When I add the device 00:1f.3 via the web interface, the whole system hangs: I cannot reach the VM, the host, or the containers via the web interface or SSH. Today I noticed that the VM actually boots as usual but gets stuck at the network interface, so I guess adding the audio device to the VM somehow breaks the networking.
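One thing that might be worth checking (a sketch, not from the original post): whether 00:1f.3 shares an IOMMU group with other chipset devices, since everything in a group is handed over together. On the machine whose IOMMU groups were posted earlier in this thread, 00:1f.3 sits in one group with 00:1f.0, 00:1f.2 and 00:1f.4, which could be related to the lock-up.
Code:
# find which IOMMU group the audio device belongs to
find /sys/kernel/iommu_groups/ -name '0000:00:1f.3'
# then list everything in that group, e.g. for group 14:
ls /sys/kernel/iommu_groups/14/devices/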
 
I guess you configured the device to be bound to vfio-pci via /etc/modprobe.d. Since kernel 5.4, vfio is not a module but is built into the kernel, so you have to make those settings on the kernel command line (so that the device gets bound to vfio-pci on boot). If the device is configured with 'hostpciX', we rebind the device to the module automatically on VM start, but this does not happen when using the 'args' method.

Hi there, I want to use the 'hostpciX' option to replace args, but using it results in the following error:

Code:
root@pve:~# qm start 100
kvm: -device vfio-pci,host=0000:00:02.0,id=hostpci2,bus=pci.0,addr=0x1b,romfile=/usr/share/kvm/vgarom.bin: Failed to mmap 0000:00:02.0 BAR 2. Performance may be slow

Code:
boot: cdn
bootdisk: sata1
cores: 4
cpu: host
hostpci0: 04:00.1
hostpci1: 00:12.0
hostpci2: 00:02,romfile=vgarom.bin
hostpci3: 00:0e
hostpci4: 03:00.0
memory: 4096
name: DS918
numa: 0
ostype: l26
sata1: local-lvm:vm-100-disk-1,size=52M
scsihw: virtio-scsi-pci
smbios1: uuid=41f33cca-011d-4699-beb4-ea6df94622b7
sockets: 1
vmgenid: 37d6caba-9e80-4cfb-8b08-d5d2a245a376
vga: none

http://vfio.blogspot.com/2016/07/
According to Alex Williamson, the Intel GPU is hardcoded to work at bus 0, device 2, but mine is at device 27 (assigned automatically by QEMU).
Is there any way to specify that in 'hostpciX'?

Code:
  Bus  0, device  27, function 0:
    VGA controller: PCI device 8086:5a85
      PCI subsystem 1849:5a85
      IRQ 11.
      BAR0: 64 bit memory at 0xfc000000 [0xfcffffff].
      BAR2: 64 bit prefetchable memory at 0xe0000000 [0xefffffff].
      BAR4: I/O at 0xe000 [0xe03f].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0000fffe].
      id "hostpci2"
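One way to see which slot Proxmox actually assigned to a hostpciX device without booting the guest (a sketch, not from the original post) is to print the generated QEMU command line and look at the addr= of the vfio-pci device:
Code:
qm showcmd 100 --pretty | grep vfio-pci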

BTW, using the args config, it was working in legacy mode before the 5.4 kernel with the following config. If there's any way to specify the bus address in 'hostpciX' mode, I think it will work.

Code:
boot: cdn
bootdisk: sata1
cores: 4
cpu: host
args: -device vfio-pci,host=00:02,addr=0x02,x-igd-opregion=on,x-igd-gms=1,romfile=vgarom.bin
hostpci1: 00:0e
hostpci2: 04:00.1
hostpci3: 00:12.0
hostpci4: 03:00.0
memory: 4096
name: DS918
numa: 0
ostype: l26
sata1: local-lvm:vm-100-disk-1,size=52M
scsihw: virtio-scsi-pci
smbios1: uuid=41f33cca-011d-4699-beb4-ea6df94622b7
sockets: 1
vga: none
vmgenid: 37d6caba-9e80-4cfb-8b08-d5d2a245a376
 