Stop Intel iGPU from using VFIO and make it use the original host i915 driver (i.e., revert passthrough)

sunnbus

New Member
Mar 31, 2023
Hello,

After attempting to pass a 13th-gen iGPU through to Windows 11 for a couple of weeks, I gave up and purchased a used AMD RX 6800. I had the same Code 43 with the AMD card, but eventually realized that disabling Resizable BAR and Above 4G Decoding resolved the issue for it. For the heck of it, and without adding the Intel to the blocklist or registering it with VFIO, I simply used the Proxmox GUI to pass the Intel through to Windows as-is, and to my surprise its Code 43 was gone too.

Then I followed one of the many guides to pass the Intel through to TrueNAS (blocklisted it, registered it with VFIO, etc.) in order to decode media, but it didn't work. Then I tried to pass it through to Windows 11 again, and this time got Code 43.

My question is: considering the iGPU passed through without using VFIO, how can I revert the Intel iGPU back to using the i915 kernel driver rather than the vfio-pci driver? I removed the blacklisting and the iGPU address from vfio.conf and rebooted, but it still used the VFIO driver. I also unbound it from VFIO (echo 1 > /sys/bus/pci/drivers/vfio-pci/0000:00:02.0/remove) and then modprobed i915 manually, and it used the i915 driver, but once I start the Windows 11 VM it reverts back to the VFIO driver and gives Code 43.

Thank you
 
My question is: considering the iGPU passed through without using VFIO, how can I revert the Intel iGPU back to using the i915 kernel driver rather than the vfio-pci driver? I removed the blacklisting and the iGPU address from vfio.conf and rebooted, but it still used the VFIO driver.
Did you run update-initramfs -u between making changes in /etc/modprobe.d/ and rebooting? Also, check cat /proc/cmdline.
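For completeness, the usual sequence after editing files in /etc/modprobe.d/ looks something like this (a sketch; the proxmox-boot-tool step only applies if you boot via systemd-boot):
Bash:
# Rebuild the initramfs for every installed kernel so /etc/modprobe.d/ changes take effect
update-initramfs -u -k all
# On systemd-boot installs (e.g. ZFS root), also sync the new initramfs to the ESPs
proxmox-boot-tool refresh
# Then reboot and verify which driver owns the device with: lspci -nnks 00:02.0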
I also unbound it from VFIO (echo 1 > /sys/bus/pci/drivers/vfio-pci/0000:00:02.0/remove) and then modprobed i915 manually, and it used the i915 driver, but once I start the Windows 11 VM it reverts back to the VFIO driver and gives Code 43.
If your VM still does passthrough, then Proxmox will bind it to vfio-pci for you when starting the VM. Double check your VM configuration for passthrough of the Intel graphics.
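As a minimal sketch of handing the device back to i915 at runtime, assuming the iGPU is at 0000:00:02.0 and nothing else grabs it:
Bash:
# Check which driver currently owns the iGPU
lspci -nnks 00:02.0
# Release it from vfio-pci (as root)
echo 0000:00:02.0 > /sys/bus/pci/drivers/vfio-pci/unbind
# Load i915 (if not already loaded) and bind the device to it
modprobe i915
echo 0000:00:02.0 > /sys/bus/pci/drivers/i915/bind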
 
Did you run update-initramfs -u between making changes in /etc/modprobe.d/ and rebooting?
Thanks for the reply. Yes, I've done this.

check cat /proc/cmdline
The output is as follows: BOOT_IMAGE=/boot/vmlinuz-6.2.11-1-pve root=/dev/mapper/pve-root ro quiet iommu=pt intel_iommu=on video=efifb:off
If your VM still does passthrough, then Proxmox will bind it to vfio-pci for you when starting the VM. Double check your VM configuration for passthrough of the Intel graphics.
I am still passing through the AMD GPU, but I made certain that the vm.conf, blacklist.conf, and vfio.conf contain nothing related to the Intel iGPU. Despite that, after the above-mentioned steps and manually unbinding, a reboot shows the Intel iGPU using no driver. I then modprobe i915 and it uses the i915 kernel driver, but if I start the Windows 11 VM I get Code 43, and when I exit it, the Intel iGPU is back to using the VFIO driver.
 
The output is as follows: BOOT_IMAGE=/boot/vmlinuz-6.2.11-1-pve root=/dev/mapper/pve-root ro quiet iommu=pt intel_iommu=on video=efifb:off
video=efifb:off does not do anything anymore and you might as well remove it.
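Where to remove it depends on the bootloader; a sketch of both cases:
Bash:
# GRUB: drop it from GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub
# systemd-boot: drop it from /etc/kernel/cmdline, then:
proxmox-boot-tool refresh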
I am still passing through the AMD GPU, but I made certain that the vm.conf, blacklist.conf, and vfio.conf contain nothing related to the Intel iGPU. Despite that, after the above-mentioned steps and manually unbinding, a reboot shows the Intel iGPU using no driver. I then modprobe i915 and it uses the i915 kernel driver, but if I start the Windows 11 VM I get Code 43, and when I exit it, the Intel iGPU is back to using the VFIO driver.
That's weird. Please share all *.conf files from the /etc/modprobe.d/ directory and the VM configuration file.
 
-blacklist.conf
blacklist radeon
blacklist amdgpu
#blacklist nouveau
#blacklist nvidia
#blacklist i915
#blacklist snd_hda_intel
#blacklist sof_pci_dev

-iommu_unsafe_interrupts.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1

-kvm.conf
options kvm ignore_msrs=1

-pve-blacklist.conf (I never manually modified this file, just noticed it and will try removing the i915 blacklisting)
blacklist nvidiafb
blacklist i915

-vfio.conf (all ids are related to the AMD gpu)
options vfio-pci ids=1002:1478,1002:1479,1002:73bf,1002:ab28 disable_vga=1

-vm.conf
agent: 1
audio0: device=ich9-intel-hda,driver=none
balloon: 0
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: local-lvm:vm-106-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:03:00,pcie=1,x-vga=1
machine: pc-q35-7.2
memory: 8192
meta: creation-qemu=7.2.0,ctime=1678883808
name: win11sun
net0: e1000=66:25:fc:da:41:3b,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: win11
scsi0: local-lvm:vm-106-disk-1,iothread=1,size=120G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=778138e8-91f1-43a9-849f-2a5e4612cb52
sockets: 1
startup: order=3
tpmstate0: local-lvm:vm-106-disk-2,size=4M,version=v2.0
vga: none
vmgenid: f6e91298-ebe4-41da-8ae3-3d87621ca8b0
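For anyone following along: the vendor:device IDs like those in vfio.conf above are the values lspci prints in square brackets, e.g.:
Bash:
# List all functions of the GPU at bus 03, slot 00, with numeric IDs
lspci -nns 03:00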
 
There is a blacklist i915 line, which would explain why i915 is not loaded automatically, but there is nothing in the VM configuration that should cause Proxmox to bind it to vfio-pci.
I don't think you need Primary GPU (,x-vga=1) or ,hidden=1,flags=+pcid with AMD GPUs. With the newer kernels (5.19, 6.1, 6.2), I don't even need to blacklist amdgpu or early-bind AMD GPUs to vfio-pci anymore.
Maybe you can share part of the Proxmox syslog (or journalctl -eb 0) from around the time of first starting that VM after a host reboot? Are you sure it's not another VM, or maybe a cron job or other helper script, that is doing something to the Intel graphics?
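For example (the filter pattern is just a guess at what's relevant):
Bash:
# Current boot's journal, narrowed to vfio/i915/iGPU-related lines
journalctl -b 0 | grep -E 'vfio|i915|00:02\.0'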

EDIT: This is my (untouched) pve-blacklist.conf:
Bash:
# This file contains a list of modules which are not supported by Proxmox VE

# nidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
 
Thanks, I'll apply your recommendations regarding the superfluous options.

Yes. I don't know where that blacklisting came from, so I removed it. The driver then loads on boot as it should (kernel driver in use: i915); then I pass the device through to the VM and the kernel driver changes to vfio-pci (as it should, correct?).

It seems the Intel iGPU may have worked that one time for some other reason, because I reverted everything to how it was and still get Code 43. Maybe I should abandon passthrough to Windows 11 and work on a successful passthrough to my TrueNAS. That's the VM I need it to work in anyway.

Proxmox syslog:
May 01 07:40:15 sun systemd[1]: Started 106.scope.
May 01 07:40:15 sun systemd-udevd[4807]: Using default interface naming scheme 'v247'.
May 01 07:40:15 sun systemd-udevd[4807]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writ>
May 01 07:40:15 sun kernel: device tap106i0 entered promiscuous mode
May 01 07:40:15 sun systemd-udevd[4807]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writ>
May 01 07:40:15 sun systemd-udevd[4807]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writ>
May 01 07:40:15 sun systemd-udevd[4810]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writ>
May 01 07:40:15 sun systemd-udevd[4810]: Using default interface naming scheme 'v247'.
May 01 07:40:15 sun kernel: vmbr1: port 4(fwpr106p0) entered blocking state
May 01 07:40:15 sun kernel: vmbr1: port 4(fwpr106p0) entered disabled state
May 01 07:40:15 sun kernel: device fwpr106p0 entered promiscuous mode
May 01 07:40:15 sun kernel: vmbr1: port 4(fwpr106p0) entered blocking state
May 01 07:40:15 sun kernel: vmbr1: port 4(fwpr106p0) entered forwarding state
May 01 07:40:15 sun kernel: fwbr106i0: port 1(fwln106i0) entered blocking state
May 01 07:40:15 sun kernel: fwbr106i0: port 1(fwln106i0) entered disabled state
May 01 07:40:15 sun kernel: device fwln106i0 entered promiscuous mode
May 01 07:40:15 sun kernel: fwbr106i0: port 1(fwln106i0) entered blocking state
May 01 07:40:15 sun kernel: fwbr106i0: port 1(fwln106i0) entered forwarding state
May 01 07:40:15 sun kernel: fwbr106i0: port 2(tap106i0) entered blocking state
May 01 07:40:15 sun kernel: fwbr106i0: port 2(tap106i0) entered disabled state
May 01 07:40:15 sun kernel: fwbr106i0: port 2(tap106i0) entered blocking state
May 01 07:40:15 sun kernel: fwbr106i0: port 2(tap106i0) entered forwarding state
May 01 07:40:16 sun kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
May 01 07:40:16 sun kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
May 01 07:40:16 sun kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x26@0x410
May 01 07:40:16 sun kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x27@0x440
May 01 07:40:16 sun kernel: vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
May 01 07:40:16 sun pvedaemon[1466]: <root@pam> end task ----:qmstart:106:root@pam: OK
May 01 07:40:16 sun kernel: x86/split lock detection: #AC: CPU 3/KVM/4893 took a split_lock trap at address: 0x7ef1e050
May 01 07:40:16 sun kernel: x86/split lock detection: #AC: CPU 2/KVM/4892 took a split_lock trap at address: 0x7ef1e050
May 01 07:40:16 sun kernel: x86/split lock detection: #AC: CPU 1/KVM/4891 took a split_lock trap at address: 0x7ef1e050
May 01 07:40:18 sun pvedaemon[4920]: starting termproxy UPID:
May 01 07:40:18 sun pvedaemon[1465]: <root@pam> starting task UPID:
May 01 07:40:18 sun pvedaemon[1465]: <root@pam> successful auth for user
May 01 07:40:18 sun login[4925]: pam_unix(login:session): session opened for user
May 01 07:40:18 sun systemd-logind[1099]: New session 5 of user ----.
May 01 07:40:18 sun systemd[1]: Started Session 5 of user ----.
May 01 07:40:18 sun login[4930]: ----- LOGIN on '/dev/pts/0'
 
Yes. I don't know where that blacklisting came from, so I removed it. The driver then loads on boot as it should (kernel driver in use: i915); then I pass the device through to the VM and the kernel driver changes to vfio-pci (as it should, correct?).
Yes: when you pass through a device, Proxmox unbinds the current driver and binds that device to vfio-pci. Proxmox does not do the reverse when you shut down the VM.
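If you want the host to reclaim the iGPU automatically, a rough sketch using a Proxmox VM hookscript (the script path and device address are assumptions for illustration):
Bash:
#!/bin/bash
# Hypothetical /var/lib/vz/snippets/rebind-igpu.sh; attach it with:
#   qm set 106 --hookscript local:snippets/rebind-igpu.sh
vmid="$1"
phase="$2"
dev="0000:00:02.0"   # assumed iGPU address

if [ "$phase" = "post-stop" ]; then
    # Give the device back to i915 once the VM has fully stopped
    echo "$dev" > /sys/bus/pci/drivers/vfio-pci/unbind 2>/dev/null || true
    modprobe i915
    echo "$dev" > /sys/bus/pci/drivers/i915/bind
fi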
It seems the Intel iGPU may have worked that one time for some other reason, because I reverted everything to how it was and still get Code 43. Maybe I should abandon passthrough to Windows 11 and work on a successful passthrough to my TrueNAS. That's the VM I need it to work in anyway.
I thought you did passthrough of an AMD GPU and the problem was that the Intel graphics also got bound to vfio-pci? Or are you trying to passthrough the Intel graphics (as well)? Maybe I'm confused.
[...]
May 01 07:40:16 sun kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
May 01 07:40:16 sun kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
May 01 07:40:16 sun kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x26@0x410
May 01 07:40:16 sun kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x27@0x440
May 01 07:40:16 sun kernel: vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
Looks like the integrated graphics (00:02.0) is passed through as well? But I did not see it in the VM configuration you showed. What are the outputs of lspci -nnks 03:00.0 and lspci -nnks 00:02.0?
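The line to look for is "Kernel driver in use"; illustrative output only (the names and IDs will differ per system):
Bash:
lspci -nnks 00:02.0
# 00:02.0 VGA compatible controller [0300]: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] [8086:a780]
#         Kernel driver in use: vfio-pci
#         Kernel modules: i915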
 
I thought you did passthrough of an AMD GPU and the problem was that the Intel graphics also got bound to vfio-pci? Or are you trying to passthrough the Intel graphics (as well)? Maybe I'm confused.
I gave up on passing the Intel through, so I bought the AMD, and the AMD passed through successfully. Then I wanted to pass the Intel through to a different VM (TrueNAS). For the heck of it, I passed the Intel through to Windows 11 to see whether, without preloading VFIO and blacklisting, it would load successfully this time, and it did. Then I followed the guides and preloaded VFIO, blacklisted, etc., but the Intel didn't pass through to TrueNAS successfully, so I tried to step back and load it successfully again on Windows 11, to no avail. The goal, though, is to pass the Intel through to TrueNAS.
Looks like the integrated graphics (00:02.0) is passed through as well? But I did not see it in the VM configuration you showed. What are the outputs of lspci -nnks 03:00.0 and lspci -nnks 00:02.0?
Sorry for the confusion. I again tried to pass the Intel through to Windows to see if it loads successfully, but it threw Code 43. I'm essentially using Windows as a testbed for loading the Intel iGPU in a VM, because I have more experience with Windows than Linux. I will focus on passing it through to TrueNAS; hopefully that's less of a headache than Windows. Thank you for all your help and clarifications.
 
If you want to pass through the boot GPU you need this work-around, but that will also prevent you from seeing Proxmox boot messages and a host console. Or boot with the AMD 6800, which should reset properly with your recent kernel (if you don't blacklist it and don't bind it to vfio-pci). Lots of options, but in each case you need to have everything set up right before it will work.
 
If you want to pass through the boot GPU you need this work-around, but that will also prevent you from seeing Proxmox boot messages and a host console. Or boot with the AMD 6800, which should reset properly with your recent kernel (if you don't blacklist it and don't bind it to vfio-pci). Lots of options, but in each case you need to have everything set up right before it will work.
I didn't think about that one. When the Intel worked in Windows 11, I think I had set the AMD as the primary GPU in the BIOS; once it worked, I reverted the Intel to primary in the BIOS and then got the Intel Code 43. I'll give it a try when I get back from work. Thank you again.
 
Development: the work-around didn't work, but based on your input I changed the BIOS so the iGPU is no longer primary. When logging in to Windows I still got Code 43, but this time, when I go to Device Manager, disable the Intel iGPU driver, and then re-enable it, the driver loads properly.

For all of you having trouble passing through: follow the many guides, but try this in addition if you're getting driver error codes or can't pass through. In the BIOS:
1. Disable Resizable BAR
2. Disable Above 4G Decoding
Doing this resolved Code 43 for my Radeon RX 6800.

In addition, if you have an Intel iGPU, make sure it is not the primary GPU in the BIOS. This resolved Code 43 for my Raptor Lake (13600K) Intel iGPU (UHD 770). Of note, you don't need an additional GPU to do this: after the initial setup and establishing remote desktop access to the Proxmox GUI/VMs, set the iGPU to secondary/auto and run headless, controlling everything remotely.
 
Development: the work-around didn't work, but based on your input I changed the BIOS so the iGPU is no longer primary. When logging in to Windows I still got Code 43, but this time, when I go to Device Manager, disable the Intel iGPU driver, and then re-enable it, the driver loads properly.

For all of you having trouble passing through: follow the many guides, but try this in addition if you're getting driver error codes or can't pass through. In the BIOS:
1. Disable Resizable BAR
2. Disable Above 4G Decoding
Doing this resolved Code 43 for my Radeon RX 6800.

In addition, if you have an Intel iGPU, make sure it is not the primary GPU in the BIOS. This resolved Code 43 for my Raptor Lake (13600K) Intel iGPU (UHD 770). Of note, you don't need an additional GPU to do this: after the initial setup and establishing remote desktop access to the Proxmox GUI/VMs, set the iGPU to secondary/auto and run headless, controlling everything remotely.

I am searching for an answer on how to get an iGPU (UHD 770) passthrough with video output into a VM, and nothing works. Did you test your setup without a second GPU? Which motherboard do you use? I can't set the iGPU as non-primary in the BIOS if there is no second GPU. It's still Code 43. I have an ASUS TUF motherboard running the latest Windows 11 22H2 v2. Always Code 43, no video output. Same under Unraid. The frustrating part is that Ubuntu works flawlessly, so it should be a problem in Windows, I guess. Has someone, anyone, got it to work? The SR-IOV alternative is working, but as there is no video output it's not for me (Parsec/Sunshine is too laggy).

I'm also trying to debug this issue, but I don't know where to look. Windows only says Code 43; is there anywhere with more information on what the problem is? Also, does Proxmox log anything related to this? dmesg says:

root@pve:~# dmesg | grep vfio-pci
[ 95.859451] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[ 95.859460] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[ 379.132887] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[ 379.132896] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[ 449.024387] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[ 449.024395] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[ 611.988833] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[ 611.988841] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[ 740.989076] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[ 740.989085] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[ 2689.808795] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[ 2689.808803] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[ 3691.652576] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[ 3691.652585] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[15099.157855] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[15099.157864] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[101879.609605] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x158
[101879.609614] vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x1e@0x180
[284869.344436] vfio-pci 0000:00:02.0: vgaarb: deactivate vga console
[284869.344439] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[284871.140371] vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
[284881.921077] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x60cb


The Windows error log says:
Miniport driver failed to start device with status
 
