I'm doing GPU passthrough wrong

banishedmonk

New Member
Mar 24, 2022
I have a 5800X and a 1660 Super and I'm trying to set up GPU passthrough. I followed the instructions, but when I run DaVinci Resolve to test it, it errors stating there's no GPU. I cannot run the 1660 as the primary GPU because I get an error. In the Windows guest the 1660 Super shows up with a Code 43 error; I've disabled/re-enabled the device with no luck. Any suggestions?

server conf. file
Code:
boot: order=ide0;ide2;net0
cores: 8
efidisk0: vmdrive:vm-100-disk-1,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:0b:00,pcie=1
ide0: vmdrive:vm-100-disk-0,size=202G
ide2: local:iso/virtio-win-0.1.208.iso,media=cdrom,size=543390K
localtime: 1
machine: pc-q35-6.2
memory: 8192
meta: creation-qemu=6.2.0,ctime=1656683154
name: BI
net0: e1000=BA:6F:9A:8D:48:28,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=18547a6b-9b38-475a-aa42-fd4d4c24c931
sockets: 1
virtio2: /dev/disk/by-id/ata-WL2000GSA6472C_WOL240228354,size=1954390536K
vmgenid: 8d1d2f19-9d67-47dc-b33a-8a67061d78f5

kvm.conf file
options kvm ignore_msrs=1

blacklist conf
Code:
blacklist radeon
blacklist nouveau
blacklist nvidia
blacklist nvidiafb

modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

grub
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
# GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstrea>
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on amd_iommu=on iommu=pt pcie_acs_override=downst>
GRUB_CMDLINE_LINUX=""
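(For reference, a minimal sketch of how host-side changes like these are usually applied and verified, assuming a GRUB-booted Proxmox host; 0b:00 is the GPU address from the hostpci0 line above:)

Code:
# apply the GRUB and /etc/modules changes, then reboot
update-grub
update-initramfs -u -k all
reboot

# after the reboot: is IOMMU enabled, and is the GPU in its own group?
dmesg | grep -e DMAR -e IOMMU
find /sys/kernel/iommu_groups/ -type l | grep 0000:0b:00

# which kernel driver (if any) is bound to the card; it must not be nouveau/nvidia
lspci -nnk -s 0b:00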
 
Hey,

If your VM gives you the Code 43 error on your graphics device, you've successfully done the passthrough configuration ^^
But you're in a "bad" situation: NVIDIA tries to lock its consumer cards in this kind of situation to push people toward their professional cards instead.

You can bypass that by hiding the VM state from the guest.
Check that here: https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/
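In Proxmox that usually boils down to a couple of lines in the VM's config file. A sketch, assuming VM ID 100 (roughly the options that guide and many others suggest; the hv_vendor_id string is arbitrary, it just must not advertise KVM):

Code:
# /etc/pve/qemu-server/100.conf
cpu: host,hidden=1,flags=+pcid
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'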
I went through the tutorial and even added the cpu args that the Reddit post suggested (minus uploading the ROM file), but it's still not working. I tried this on a fresh Windows 10 VM install as well.
 
In the Windows guest the 1660 Super shows up with a Code 43 error
What version of Nvidia drivers are you running in Windows?

NOTE: That tutorial is starting to get out of date and has some unnecessary stuff in it.

Have you applied updates to Proxmox yet?

Can you dump the full output of cat /proc/cmdline?
 
I have the same problem (error 43) with an NVIDIA Quadro 5000 on Windows 11. My Proxmox is up to date (7.2.7), so I don't understand it. Any ideas?
 
What version of Nvidia drivers are you running in Windows?

NOTE: That tutorial is starting to get out of date and has some unnecessary stuff in it.

Have you applied updates to Proxmox yet?

Can you dump the full output of cat /proc/cmdline?
This is what I got:

Code:
root@host:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-5.15.30-2-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off
root@host:~#

I haven't applied updates to Proxmox, but the install is only about a month old.

I'm using the most up-to-date Nvidia drivers.
 
BOOT_IMAGE=/boot/vmlinuz-5.15.30-2-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off
video=vesafb:off,efifb:off does not work anymore; you need to write it as video=vesafb:off video=efifb:off, but even that won't help when running kernel 5.15 with UEFI boot. The best work-around for kernel 5.15 (with UEFI), as reported by @dece03 here, is initcall_blacklist=sysfb_init. (Also, amd_iommu=on is completely unnecessary because it is on by default.)
Therefore, try this instead: BOOT_IMAGE=/boot/vmlinuz-5.15.30-2-pve root=/dev/mapper/pve-root ro quiet iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init
 
video=vesafb:off,efifb:off does not work anymore; you need to write it as video=vesafb:off video=efifb:off, but even that won't help when running kernel 5.15 with UEFI boot. The best work-around for kernel 5.15 (with UEFI), as reported by @dece03 here, is initcall_blacklist=sysfb_init. (Also, amd_iommu=on is completely unnecessary because it is on by default.)
Therefore, try this instead: BOOT_IMAGE=/boot/vmlinuz-5.15.30-2-pve root=/dev/mapper/pve-root ro quiet iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init
Do I nano and change the file that way?
 
video=vesafb:off,efifb:off does not work anymore; you need to write it as video=vesafb:off video=efifb:off, but even that won't help when running kernel 5.15 with UEFI boot. The best work-around for kernel 5.15 (with UEFI), as reported by @dece03 here, is initcall_blacklist=sysfb_init. (Also, amd_iommu=on is completely unnecessary because it is on by default.)
Therefore, try this instead: BOOT_IMAGE=/boot/vmlinuz-5.15.30-2-pve root=/dev/mapper/pve-root ro quiet iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init
I'm interested in the solution as well, so please be clear and precise in the explanation. Thanks!
 
I updated the GRUB config. DaVinci Resolve opens now (though it force quits), and I can see the NVIDIA Control Panel settings. I think it worked; I'll update with more results.
 
Yes, change it in your GRUB config. You can probably also omit pcie_acs_override=downstream,multifunction

So...
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt initcall_blacklist=sysfb_init"
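(A minimal sketch of applying that on the host, assuming you boot via GRUB; on a systemd-boot install you would edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead:)

Code:
nano /etc/default/grub     # set GRUB_CMDLINE_LINUX_DEFAULT as shown above
update-grub
reboot

# after the reboot, confirm the new parameters are active
cat /proc/cmdline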
I followed what you wrote. Windows detects a VGA card in Device Manager, but when I try to install the NVIDIA drivers, at the moment of detection my screen goes black and the Remote Desktop session disconnects, then reconnects a few seconds later. The driver installation is cancelled.

EDIT: I tried on Windows 10. The driver installation succeeds, but when I restart the VM it's impossible to connect with Microsoft Remote Desktop unless I remove the graphics card from the VM's settings.
 
I followed what you wrote. Windows detects a VGA card in Device Manager, but when I try to install the NVIDIA drivers, at the moment of detection my screen goes black and the Remote Desktop session disconnects, then reconnects a few seconds later. The driver installation is cancelled.

EDIT: I tried on Windows 10. The driver installation succeeds, but when I restart the VM it's impossible to connect with Microsoft Remote Desktop unless I remove the graphics card from the VM's settings.
Can you provide some additional details?
  • Hardware used in your Proxmox host.
  • Configuration for Proxmox and VFIO.
  • Configuration for VM.
 
Can you provide some additional details?
  • Hardware used in your Proxmox host.
  • Configuration for Proxmox and VFIO.
  • Configuration for VM.
Hardware used in your Proxmox host

- Motherboard: ASUS Z9PR-D12
- 2x Intel Xeon E5-2650L v2
- 64 GB DDR3 ECC RAM
- NVIDIA Quadro 5000 graphics card

Configuration for Proxmox and VFIO

Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt initcall_blacklist=sysfb_init"
GRUB_CMDLINE_LINUX=""

VFIO

Code:
options vfio-pci ids=10de:06d9,10de:0be5 disable_vga=1

Configuration for VM.

Code:
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0;ide0
cores: 8
cpu: host,hidden=1,flags=+pcid
efidisk0: vms:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide0: local:iso/virtio-win-0.1.217.iso,media=cdrom,size=519172K
ide2: local:iso/Win10_21H2_FrenchCanadian_x64.iso,media=cdrom,size=5438742K
machine: pc-q35-6.2
memory: 8048
meta: creation-qemu=6.2.0,ctime=1657333199
name: win10
net0: virtio=52:5A:62:1B:32:B5,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
scsi0: vms:vm-100-disk-1,cache=writeback,discard=on,size=80G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=3122a59e-ac37-4da4-b067-1c63068f6d18
sockets: 1
 
But when I restart the VM it's impossible to connect with Microsoft Remote Desktop unless I remove the graphics card from the VM's settings.
Nothing above looks wrong. I'm assuming that VM config is after you've removed the GPU?

When you restart the VM, can you confirm it's actually running? I'm wondering if it's just completely offline.
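(A quick way to check from the host, as a sketch; the VM ID 100 and the 02:00 address are taken from the configs posted above:)

Code:
qm status 100            # is the VM actually running?
lspci -nnk -s 02:00      # which kernel driver is bound to the GPU while the VM is up?
dmesg | grep -i vfio     # any vfio/passthrough errors logged on the host?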
 
Nothing above looks wrong. I'm assuming that VM config is after you've removed the GPU?

When you restart the VM, can you confirm it's actually running? I'm wondering if it's just completely offline.
Code:
cores: 8
cpu: host,hidden=1,flags=+pcid
efidisk0: vms:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:02:00.0
ide0: local:iso/virtio-win-0.1.217.iso,media=cdrom,size=519172K
ide2: local:iso/Win10_21H2_FrenchCanadian_x64.iso,media=cdrom,size=5438742K
machine: pc-q35-6.2
memory: 8048
meta: creation-qemu=6.2.0,ctime=1657333199
name: win10
net0: virtio=52:5A:62:1B:32:B5,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
scsi0: vms:vm-100-disk-1,cache=writeback,discard=on,size=80G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=3122a59e-ac37-4da4-b067-1c63068f6d18
sockets: 1
vmgenid: 61251d49-5f07-401b-a993-be306c7a17a0
 
Nothing above looks wrong. I'm assuming that VM config is after you've removed the GPU?

When you restart the VM, can you confirm it's actually running? I'm wondering if it's just completely offline.
A small update: Windows now accepts my graphics card and I can connect to Windows. But after a restart, Device Manager gives me an error 43 again.
 
