Infamous Code 43 Error

XTREEMMAK

New Member
May 21, 2020
Hey all,

Just started with Proxmox, and for the most part things had been going well up till now. I'm trying to pass through an EVGA GTX 1660 to a Windows 10 VM and I'm getting the all-too-popular Code 43 error. I did manage to do a ROM dump on the GPU, and it reported that the card is NOT capable of booting in EFI (Type: 0).
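For what it's worth, here's roughly how the dump-and-check can be done. This is a sketch: it assumes the card sits at 01:00.0, nothing on the host is blocking ROM reads, and the offsets follow the standard PCI expansion ROM header layout.

```shell
# Dump the vBIOS through sysfs (as root; assumes the card is at 01:00.0):
#   echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
#   cat /sys/bus/pci/devices/0000:01:00.0/rom > /tmp/vbios.rom
#   echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom

# Print the code type of the first image in a dumped ROM:
# 0 = legacy x86 BIOS, 3 = EFI.
vbios_code_type() {
  rom=$1
  # The pointer to the PCI data structure is a little-endian word at offset 0x18.
  pcir=$(( $(od -An -tu1 -j24 -N1 "$rom") + 256 * $(od -An -tu1 -j25 -N1 "$rom") ))
  # The code type byte sits at offset 0x14 inside that structure.
  echo $(( $(od -An -tu1 -j$((pcir + 20)) -N1 "$rom") ))
}
# Usage: vbios_code_type /tmp/vbios.rom
```

If that prints 3 for any image in the ROM, the card should be able to POST under OVMF/UEFI.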

Here's the funny thing though: I actually DID get passthrough to work a few days ago, and suddenly I no longer can. So I know the card can work; it just isn't working now. I've been rolling back to clean installs from before the driver install, changing arguments, etc., and the driver still detects that I'm running under a hypervisor. (Throwing paint at the wall here, but does NVIDIA maybe store something on the card to detect this?)

Windows is also saying it detects a hypervisor in System Information (if that means anything).

So with that, I should start posting hardware details and my current configs:

Hardware:
CPU: Threadripper 3960x
Mobo: Asrock TRX40 Creator


VM config:

args: -machine type=q35,kernel_irqchip=on
balloon: 8000
bootdisk: sata0
cores: 8
cpu: host,hidden=1
hostpci0: 01:00,pcie=1,x-vga=1
ide2: none,media=cdrom
machine: q35
memory: 16000
name: KJC-GFXHD
net0: e1000=[MAC ADDRESS],bridge=vmbr0,firewall=1
numa: 0
ostype: win10
parent: default
sata0: VM-Storage:vm-100-disk-0,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=47f80e3d-4d28-40a9-91db-45fe8c27ce19
sockets: 1
vmgenid: 3592f311-3bb0-488c-9742-922b93d35536
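A note on the cpu: host,hidden=1 line, since that's the part meant to beat the driver's hypervisor check: as I understand it (a sketch of Proxmox's behavior, not verified against its source), hidden=1 makes Proxmox add kvm=off to the generated -cpu flags so the guest driver doesn't see the KVM CPUID signature.

```
# Roughly what "cpu: host,hidden=1" expands to on the QEMU command line
# (sketch; verify on your own host with "qm showcmd 100"):
-cpu host,kvm=off ...
```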

pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

lspci -k

01:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660] (rev a1)
Subsystem: eVga.com. Corp. TU116 [GeForce GTX 1660]
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
01:00.1 Audio device: NVIDIA Corporation Device 1aeb (rev a1)
Subsystem: eVga.com. Corp. Device 1163
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
01:00.2 USB controller: NVIDIA Corporation Device 1aec (rev a1)
Subsystem: eVga.com. Corp. Device 1163
Kernel driver in use: vfio-pci
01:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1aed (rev a1)
Subsystem: eVga.com. Corp. Device 1163
Kernel driver in use: vfio-pci
Kernel modules: i2c_nvidia_gpu

/etc/modprobe.d/blacklist.conf

blacklist radeon
blacklist nouveau
blacklist nvidia
blacklist i2c-nvidia-gpu

/etc/modprobe.d/iommu_unsafe_interrupts.conf
(Though if I understand correctly, my processor shouldn't need this?)
options vfio_iommu_type1 allow_unsafe_interrupts=1
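To double-check that, I grep the boot log for the interrupt remapping message; if remapping is enabled, allow_unsafe_interrupts shouldn't be needed. A sketch (the match strings are the usual kernel log wording for AMD and Intel, which may vary by kernel version):

```shell
# Succeed if a saved kernel log shows IOMMU interrupt remapping enabled
# (AMD-Vi wording on AMD boards, DMAR-IR on Intel).
has_intr_remap() {  # usage: dmesg > /tmp/boot.log; has_intr_remap /tmp/boot.log
  grep -qiE 'AMD-Vi: Interrupt remapping enabled|DMAR-IR: Enabled IRQ remapping' "$1"
}
```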

/etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:2184,10de:1aeb,10de:1aec,10de:1aed
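For anyone following along: those IDs come straight out of lspci -nn. A small helper sketch (assumes GNU-style grep and paste) that collects every function's vendor:device pair for a slot into the comma-separated list vfio-pci wants:

```shell
# Print the vendor:device IDs of all functions in a PCI slot as a
# comma-separated list, ready to paste into "options vfio-pci ids=...".
vfio_ids() {  # usage: lspci -nn | vfio_ids 01:00
  grep "^$1" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]' | paste -sd, -
}
```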

I think that's everything.

I'm honestly not sure what changed for the driver to start throwing Code 43. The only thing I did before all this was install the VirtIO drivers for my NIC to get away from the e1000. But since I know passthrough can work, I'm determined to get it working again.

Please help and thanks!
 
Alright, so here are a couple of things I've learned.

1) I'm not sure what the ROM dump I did was reporting, but when I researched the card over at https://www.techpowerup.com/, I found out that the GTX 1660 actually IS capable of UEFI.

2) I had a hypothesis. Since I had a spare GPU in another slot, a GT 730 that I was going to put in another VM anyway, I tried making a VM with that card. With only the basic setting of cpu: host,hidden=1, the drivers installed properly! Just to be sure everything was fine and VirtIO wasn't causing any problems, I then installed those drivers and rebooted a couple of times. Flawless.

3) So I tried the GTX 1660 again on two VMs, one with OVMF and the other with SeaBIOS. BOTH failed, whereas before all this happened I was at least able to get the 1660 working with SeaBIOS. So I had another hypothesis: card order might be the problem. Since this motherboard has two x16 slots I can utilize, it was no trouble to move the 1660 to the third PCIe slot and the GT 730 to PCIe slot one.

4) Results: the 1660 WORKS, and the 730 faulted with Code 43 on a VM where it previously worked. So I'm guessing the motherboard needs a GPU in slot one for initialization.

What I'm curious about is why the old position of the cards worked before. My guess is that something delayed the host from touching the card, and after a reboot that delay was gone, so the card in slot one is now treated as the primary. I don't know. What matters most is that there was nothing inherently wrong with my config. As a note, I removed all the vfio.conf and unsafe-interrupt stuff, as apparently I didn't need it. I'll keep monitoring everything to make sure those VMs stay up this time. If it flips back to Code 43 again, I guess I'll be back.
 
What I'm curious about is why the old position of the cards worked before?
Did you do a kernel upgrade to the 5.4 kernel?
If yes: we built that kernel with vfio built in, so the options set in modprobe.d did not get applied. The host initializing the card may then have prevented the guest driver from loading correctly (the Windows driver expects the card not to have been touched before).
We'll build future kernel versions with vfio-pci as a module again (I think with 5.4.40), so that should solve itself with the next update (if it was indeed this).
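You can check which case your running kernel is from its config file. A sketch that classifies the CONFIG_VFIO_PCI line read on stdin:

```shell
# Report how vfio-pci was built for a kernel config read on stdin:
# "builtin" (=y, modprobe.d options are ignored), "module" (=m), or "absent".
vfio_pci_build() {  # usage: vfio_pci_build < /boot/config-$(uname -r)
  case $(grep -E '^CONFIG_VFIO_PCI=' | cut -d= -f2) in
    y) echo builtin ;;
    m) echo module ;;
    *) echo absent ;;
  esac
}
```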
 
