[SOLVED] Dreaded "Code 43" for NVidia GPU returning after moving from ESXi to Proxmox

EpicLPer

Member
Sep 7, 2022
47
6
8
29
Austria
epiclper.com
Heya,

I've recently moved my main (homelab) host from ESXi to Proxmox, and the dreaded "Code 43" error for my NVidia Quadro P2000 has returned: the driver refuses to load once again. I already fought this error relentlessly back on ESXi and got rid of it after hours of trial and error, but on Proxmox it's back with no end in sight :( It's most likely NVidia's VM detection, which supposedly hasn't been an issue for quite a while now, since drivers from roughly the last two years are said to have removed it, but even the latest one from March 2024 still seems to detect something.
I've already Googled around quite a bit, but so far haven't found any solution other than people editing their GPU BIOS (which I want to avoid if possible, but if not I'll try my best not to screw the card up :) )

I already tried the following (most of it according to this guide: https://pve.proxmox.com/wiki/PCI_Passthrough#Nvidia_Tips):
  • disabled CSM in my host's BIOS, which did seem to break console output, but that's a worry for later and most likely just because the GPU isn't working properly yet
  • tried adding the Vendor and Device ID (a sketch of what that looks like is below this list)
  • blacklisted the Nouveau and NVidia drivers
  • uninstalled and reinstalled the latest driver in the guest, after removing it with DDU first
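
In case it helps: the Vendor/Device ID step, as I understand the wiki, amounts to binding both functions of the card to vfio-pci at boot via a modprobe option. A rough sketch, with placeholder IDs (take your own from lspci; my card sits at 02:00):
Code:
# look up the vendor:device IDs of the GPU and its audio function
lspci -nn -s 02:00

# /etc/modprobe.d/vfio.conf -- replace the placeholder IDs with the ones lspci printed
options vfio-pci ids=10de:xxxx,10de:yyyy disable_vga=1

# rebuild the initramfs and reboot afterwards
update-initramfs -u -k all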

Does anyone have another possible solution for this?


My current VM settings are as follows:
Code:
/usr/bin/kvm -id 100 -name 'WinServer2022-1,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=XXXXXX' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/dev/pve/vm-100-disk-1,size=131072' -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vendor_id=proxmox,hv_vpindex,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt' -m 20480 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=XXXXXXX' -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=0000:02:00.0,id=hostpci0.0,bus=pci.0,addr=0x10.0,multifunction=on' -device 'vfio-pci,host=0000:02:00.1,id=hostpci0.1,bus=pci.0,addr=0x10.1' -chardev 'socket,path=/var/run/qemu-server/100.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:4b76a06fbecb' -drive 'file=/var/lib/vz/template/iso/virtio-win.iso,if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'pvscsi,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/pve/vm-100-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' -device 'scsi-hd,bus=scsihw0.0,scsi-id=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' -drive 'file=/dev/HDD1-4TB-thin/vm-100-disk-0,if=none,id=drive-scsi1,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' -device 'scsi-hd,bus=scsihw0.0,scsi-id=1,drive=drive-scsi1,id=scsi1' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=XXXXXXX,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256' -rtc 'driftfix=slew,base=localtime' -machine 'hpet=off,type=pc-q35-8.1+pve0' -global 'kvm-pit.lost_tick_policy=discard'
 
My current VM settings are as follows:
hi,

the better way to post the vm settings is to use
Code:
qm config ID

that shows the pve source vm config, not the generated qemu commandline
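
for the vm in this thread (id 100) that would simply be:
Code:
qm config 100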
 
Got it to work now after some tinkering :)

I'm not entirely sure which step helped, but I found this guide which talks about "hiding the fact it's a VM", tried a few steps from there, and it worked! Guide: https://forum.proxmox.com/threads/windows-11-vm-for-gaming-setup-guide.137718/

But I think these were the main things that helped in my case (in case someone else needs this):
  • Instead of using the "mapped device" feature from Proxmox I simply passed the GPU through as a raw PCI device. To do this, add a PCI device to your VM but choose the "Raw Device" option instead. Add the GPU (not the audio function) first, with the "Primary Device" option and "PCI-Express" ticked under Advanced, then do the same for the audio device but leave out "Primary Device".
    Try your VM at this point, maybe it already works for you!
  • If it doesn't, I additionally added the following lines to /etc/modprobe.d/pve-blacklist.conf (see the note after this list on applying and checking this):
    Code:
    blacklist nvidiafb
    blacklist nouveau
    blacklist nvidia*
  • If that still doesn't work: following the guide, I also changed the SCSI controller to "LSI 53C895A" and switched my hard drives from SCSI to SATA with "Writeback" cache
  • Another step was to set the network card to "Intel E1000", though I don't think that makes much of a difference at this point
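
As for the note promised in the blacklist step: edits under /etc/modprobe.d/ only take effect once the initramfs is rebuilt and the host rebooted, and you can then check on the host which driver is bound to the card (assuming the GPU is still at 02:00 like in my config):
Code:
update-initramfs -u -k all
# after the reboot:
lspci -nnk -s 02:00.0
# "Kernel driver in use:" should show vfio-pci (or nothing), not nouveau/nvidia
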
I also tried the "hidden=1" option and setting the BIOS manufacturer etc., but removed those from my config again in the end; the GPU is still working.

I hope this helps someone else in the future! I think the key was raw mapping the GPU, but I'm not entirely sure in hindsight.
 
Got it to work now after some tinkering :)
great

I think the key was raw mapping the GPU, but I'm not entirely sure in hindsight.
maybe, but unlikely. if configured right, using a mapping vs the raw device results in the same qemu commandline. maybe there was something misconfigured with the mapping

if you post both the mapping config and the current vm config, we could verify that
 
For the mapping I used the "Passthrough all functions" entry for my GPU and no other options, though it did throw the warning "A selected device is not in a separate IOMMU group, make sure this is intended.". I have 2 GPUs in that server and both seem to report the same group, via a command I forgot now ^^" Though the command was listed in the guide I linked above.
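
(One common way to list the groups, though I'm not sure it's the exact command from the guide, is a small loop over sysfs:)
Code:
# print every PCI device together with its IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}
    printf 'group %s: %s\n' "${g%%/*}" "$(lspci -nns "${d##*/}")"
done | sort -V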


The current VM config is:
Code:
agent: 1
bios: ovmf
boot: order=sata0
cores: 8
cpu: host
efidisk0: local-lvm:vm-100-disk-1,size=4M
hostpci0: 0000:02:00.0,pcie=1,x-vga=1
hostpci1: 0000:02:00.1,pcie=1
ide2: local:iso/virtio-win.iso,media=cdrom,size=715188K
machine: pc-q35-8.1
memory: 20480
meta: creation-qemu=8.1.5,ctime=1711913379
name: WinServer2022-1
net0: e1000=XXXXX,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win11
sata0: local-lvm:vm-100-disk-0,cache=writeback,discard=on,size=95G,ssd=1
sata1: HDD1-4TB-thin:vm-100-disk-0,cache=writeback,discard=on,size=960G
smbios1: uuid=XXXXX
sockets: 1
unused0: DS916_ProxmoxNFS:100/vm-100-disk-1.qcow2
unused1: DS916_ProxmoxNFS:100/vm-100-disk-0.qcow2
unused2: DS916_ProxmoxNFS:100/vm-100-disk-2.qcow2
vga: vmware
vmgenid: XXX
 
hostpci0: 0000:02:00.0,pcie=1,x-vga=1
hostpci1: 0000:02:00.1,pcie=1
i guess maybe this makes the difference?

when you passed through the whole device originally, it put both functions onto the same virtual device and marked it 'multifunction' (like the real device)

you could test it by modifying the lines to
Code:
hostpci0: 0000:02:00,pcie=1,x-vga=1
(note the missing last part '.0'/'.1')

this will pass through all functions together
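
e.g. via the cli, something like this (assuming vm id 100 as above; hostpci1 then becomes redundant):
Code:
qm set 100 --hostpci0 0000:02:00,pcie=1,x-vga=1
qm set 100 --delete hostpci1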

I have 2 GPUs in that server, both seem to report the same group via a command I forgot now ^^
do you pass through both gpus? that won't really work (since we normally reset all devices in the iommu group)
 
