[SOLVED] Win10 GPU-passthrough code43 with nVidia

matthei

Member
Aug 20, 2021
Hey, I've been trying to figure this out for a few days now, so I'm asking for help.
I have a fresh machine with a fresh installation of Proxmox 8.1.3, running a Win10 guest that was restored from Proxmox 7 (that machine broke). I think I initially created this VM/guest on Proxmox 6.

Windows does display the desktop through the GPU on one monitor, but I have three and the other two aren't detected.
Device Manager shows "Code 43" for the graphics card.

I used DDU (Display Driver Uninstaller) to uninstall the drivers in safe mode.
When Windows boots up, Windows Update automatically downloads and installs the NVIDIA driver, and I get a notification that the driver has been installed. When I click it, the NVIDIA settings just show a pop-up box: "NVIDIA display settings are not available. You are not currently using a display attached to an NVIDIA GPU".


I guess that I'm just missing something very small, I hope someone will know, thanks!


Host machine hardware:
MB: Supermicro X11SPM
CPU: Intel Xeon Silver 4214R
GPU: GeForce 1050 Ti


My current configuration in proxmox is:
/etc/default/grub
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init"
(I have tried various configurations... after each change I run "update-grub" and reboot.)
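For reference, the apply-and-verify cycle after a grub edit can be sketched like this (a sketch, not from the original post; the exact dmesg wording varies by kernel version):

```shell
# Regenerate the grub config from /etc/default/grub, then reboot
update-grub
reboot

# After the reboot, confirm the kernel actually received the parameters...
cat /proc/cmdline
# ...and that the IOMMU initialized
dmesg | grep -e DMAR -e IOMMU
```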

Proxmox is installed on an ext4 partition; running "proxmox-boot-tool status" gives this output:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.

/etc/modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
I have only changed these once, at the start... and I did run "update-initramfs -u -k all" afterwards and rebooted.
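A quick sanity check that the modules actually made it into the running kernel after the initramfs rebuild (note: on newer kernels vfio_virqfd is merged into vfio, so it may not appear separately):

```shell
# The vfio modules should be listed once the host has rebooted
lsmod | grep vfio
```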


Blacklist:
/etc/modprobe.d/pve-blacklist.conf
Code:
blacklist nvidiafb
blacklist nouveau
blacklist nvidia
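If the blacklist is working, neither nouveau nor nvidia should be loaded on the host after a reboot; something like this should print nothing (a sketch, not from the original post):

```shell
# Empty output here means the blacklist took effect
lsmod | grep -E '^(nouveau|nvidia)'
```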


vm config
Code:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;net0
cores: 6
cpu: host,hidden=1,flags=+pcid
efidisk0: local-lvm:vm-101-disk-0,size=4M
hostpci0: 0000:65:00,x-vga=1
machine: pc-q35-6.0
memory: 49152
name: win10-primary
net0: virtio=[redacted],bridge=vmbr0,firewall=1,tag=123
numa: 0
onboot: 1
ostype: win10
scsi0: local-lvm:vm-101-disk-1,cache=writeback,size=120G
scsi2: local-lvm:vm-101-disk-2,backup=0,size=51208M
scsi4: local-lvm:vm-101-disk-3,cache=writeback,discard=on,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=[redacted]
sockets: 1
usb0: host=1-1.1.1,usb3=1
usb1: host=1-1.1.2,usb3=1
usb2: host=1-1.1.3,usb3=1
usb3: host=1-1.1.4,usb3=1
usb4: host=1-1.2,usb3=1
usb5: host=1-1.3,usb3=1
usb6: host=1-1.4,usb3=1
vmgenid: [redacted]
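For completeness, the same config can be inspected or changed from the host shell with qm instead of editing /etc/pve/qemu-server/101.conf directly (VM id 101 assumed):

```shell
# Show the passthrough line as Proxmox sees it
qm config 101 | grep hostpci

# Equivalent of editing the config file by hand
qm set 101 --hostpci0 0000:65:00,x-vga=1
```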
 
I have also just tried:

Adding "nomodeset" to grub, no improvement.

The PCI-E device had "All functions" enabled, so I tried removing it and adding the GPU and the "High Definition Audio" controller separately as two PCI-E devices. No improvement.

Here is output from lspci -v
Code:
65:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti]
        Flags: fast devsel, IRQ 243, NUMA node 0, IOMMU group 8
        Memory at df000000 (32-bit, non-prefetchable) [size=16M]
        Memory at 38bfe0000000 (64-bit, prefetchable) [size=256M]
        Memory at 38bff0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at b000 [size=128]
        Expansion ROM at e0000000 [disabled] [size=512K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Legacy Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [250] Latency Tolerance Reporting
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [420] Advanced Error Reporting
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] Secondary PCI Express
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
 
Trying another thing mentioned in this thread: https://forum.proxmox.com/threads/pci-passthrough-blacklisting.120659/

lspci -n -s 65:00
Code:
65:00.0 0300: 10de:1c82 (rev a1)
65:00.1 0403: 10de:0fb9 (rev a1)

File /etc/modprobe.d/pve-blacklist.conf now looks like this
Code:
blacklist nvidiafb
blacklist nouveau
blacklist nvidia
options vfio-pci ids=10de:1c82,10de:0fb9
I also ran "update-initramfs -u -k all" afterwards, and rebooted.

No improvement.
EDIT: I later realized this should be in "vfio.conf"... I describe the step I took a few posts down.
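A way to check whether the ids= option actually bound both functions to vfio-pci (a sketch; your bus address may differ):

```shell
# Both 65:00.0 and 65:00.1 should report "Kernel driver in use: vfio-pci"
lspci -nnk -s 65:00
```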


Perhaps worth mentioning: while Windows is booting, the Proxmox logo and loading animation are displayed on the monitor connected via DP. Then, as Windows finishes booting and the login screen comes up, the display switches to the monitor connected via HDMI. So both monitors are operational, connected, and detected by the graphics card; but for some reason, Windows doesn't treat it as a fully working graphics card.
 
I realized that the "options vfio-pci..." line should be in "vfio.conf".
By the way, there was no vfio.conf file in the modprobe.d directory, so I created one.

First tried with vfio.conf file being this:
Code:
options vfio-pci ids=10de:1c82,10de:0fb9
No improvement.

Then added "disable_vga=1" so that the file looked like this
Code:
options vfio-pci ids=10de:1c82,10de:0fb9 disable_vga=1
No improvement.

I found this long, comprehensive tutorial and am going through it now:
https://forum.proxmox.com/threads/p...x-ve-8-installation-and-configuration.130218/
 
Reading the tutorial at https://forum.proxmox.com/threads/p...x-ve-8-installation-and-configuration.130218/
I noticed that I had put the blacklisted drivers into "pve-blacklist.conf", but maybe they should be in "blacklist.conf"?
Well, I added the lines (nvidia, nouveau, nvidiafb, nvidia_drm) to "blacklist.conf" and rebooted; still got Code 43.

Trying with installing Linux Mint now, to see if it detects the GPU properly.

(I hope it's not a problem that I'm posting these steps I've taken as separate replies)
 
Solved... I hadn't ticked the "PCI-E" checkbox. My 101.conf now has this line:

Code:
hostpci0: 0000:65:00,pcie=1,x-vga=1

I knew it was something silly. I hope this thread helps someone :)
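For anyone hitting the same thing: the fix can also be applied from the host shell rather than the GUI checkbox (assumes VM id 101 and the bus address from this thread):

```shell
# pcie=1 presents the GPU as a PCI Express device on the q35 machine type
qm set 101 --hostpci0 0000:65:00,pcie=1,x-vga=1
```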



I then removed all the various configurations that seemed unnecessary (I had added them in an attempt to get it all to work). Most of this is probably only relevant to older versions of Proxmox (before version 8). I wanted a minimal, clean configuration, so that if there's something to edit in the future, there are fewer variables to worry about. Here are my findings.

I made each change, rebooted, and checked whether the Windows guest still boots correctly with all three monitors working. After every change to /etc/default/grub, I ran "update-grub".

1. Deleted file /etc/modprobe.d/vfio.conf which contained "options vfio-pci ids=10de:1c82,10de:0fb9 disable_vga=1"
STILL WORKS, so it's probably not needed.

2. Deleted /etc/modprobe.d/blacklist.conf, which contained the lines "nvidia", "nouveau" ...
STILL WORKS, so it's probably not needed.

3. Deleted /etc/modprobe.d/pve-blacklist.conf, which contained the lines "nvidia", "nouveau", ...
Still works, but a glitchy Proxmox logo was temporarily displayed on the screens connected via the GPU. The glitchy logo was shown for a few seconds, and then everything worked just fine. So I restored that file.

4. Deleted parameter from /etc/default/grub: pcie_acs_override=downstream,multifunction
STILL WORKS, so it's probably not needed.

5. Deleted parameter from /etc/default/grub: iommu=pt
STILL WORKS, so it's probably not needed.

6. Deleted parameter from /etc/default/grub: initcall_blacklist=sysfb_init
STILL WORKS, so it's probably not needed. I was a bit surprised...

7. Deleted parameter from /etc/default/grub: intel_iommu=on
STILL WORKS, so it's probably not needed. Also surprising, but I remember seeing in a discussion somewhere that it's no longer needed.
This thread here explains it: https://www.reddit.com/r/VFIO/comments/xw8mnc/is_the_parameter_intel_iommuon_needed_to_passthru/
Running "dmesg | grep -E 'DMAR|IOMMU'" does output text, where one line is: [ 0.106756] DMAR: IOMMU enabled
So it looks like the new kernel or Proxmox 8 now enables this by default.
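If anyone wants to double-check their own isolation before removing parameters, this common loop lists every IOMMU group and its devices (a sketch; the GPU and its audio function should not share a group with unrelated devices):

```shell
# Print each IOMMU group along with the PCI devices it contains
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```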
 
Hello,
Could you recap, starting from a fresh Proxmox install, what the minimum required steps are to make a Windows VM with GPU passthrough?
 
It looks like having to edit a bunch of config files is a thing of the past.
I just did a fresh install of Proxmox 8.2.2, created a Windows machine, added the PCI-E GPU, and it just worked.

However, this shows up for about 2 seconds when starting the Windows guest, and can be fixed by editing pve-blacklist.conf (see instructions below):
[screenshot: glitchy Proxmox boot logo briefly shown on the passed-through GPU's monitors]


Hardware specs
Code:
Supermicro X11SPM-TPF-O
Intel(R) Xeon(R) Silver 4214R CPU
nvidia GeForce GTX 1050 Ti

Here's all the steps I just did:
Code:
Proxmox 8.2.2

Create new VM
- Machine type q35, Bios: OVMF (UEFI)
- Processor type: host
- Disk: SCSI-0, [y] discard, [y]IOThread, [y]SSD Emulation
(the rest mostly all defaults or whatever is necessary)

Using Windows10-22H2.iso
Using virtio-win-0.1.240.iso
During installation, click Load Driver
  - load: "virtio drive -> vioscsi/win10/amd64"
  - load: "virtio drive -> NetKVM/win10/amd64"

After installation, enable remote desktop connections to the machine
and test to make sure you can connect.

In proxmox go to hardware tab, Add -> PCI Device
Raw device: 0000:65:00.0   (the ID of your PCIE device will probably be different)
[y] Primary GPU
[y] All functions (I think it's not necessary)

Go to proxmox console:
nano /etc/modprobe.d/pve-blacklist.conf
already contained  "blacklist nvidiafb"
added these 2 lines:
blacklist nouveau
blacklist nvidia

Restarted; works well, and no longer shows that 2-second noise/glitch screen.
 
Hey, I need to know something for a successful passthrough: do we really need an external monitor connected to HDMI, or a dummy plug?

I have tried on a Windows VM but got Code 43; maybe it needs an external monitor for my NVIDIA GPU to be configured properly.

By the way, I'm using it on a laptop as a Proxmox workstation desktop (GNOME).
 
