Code 43 on a 650 Ti, even with every option set right

fridtjof

New Member
Oct 8, 2016
I hope this is not getting annoying for everyone here, but I'm also having issues with GPU Passthrough.

First off, what I'm running:
Hardware:
i7-6700k
Asus Z170-Pro
Asus GTX 650 Ti

Software:
Proxmox 4.3-1/e7cdc165
Guest:
Windows 10 Enterprise 1607
Nvidia Driver 372.54

vmid.conf:
Code:
bios: ovmf
boot: dcn
bootdisk: virtio0
cores: 4
cpu: host,hidden=1
efidisk0: vm:103/vm-103-disk-2.qcow2,size=128K
hostpci0: 01:00,pcie=1,x-vga=on
hostpci1: 07:00,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 8192
name: workstation
net0: virtio=AE:23:11:59:1D:D4,bridge=vmbr0
net1: virtio=56:5B:C0:B9:10:8F,bridge=vmbr1
numa: 0
ostype: win8
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=a24966ab-e72b-403a-a0b4-c7182faa1fff
sockets: 1
tablet: 0
usb0: host=1-9
usb1: host=1-10
usb2: host=1-13
usb3: host=1-14
virtio0: vm:103/vm-103-disk-1.qcow2,size=250G
(07:00 is a USB controller)

The actual passthrough works fine, because when I booted the same VM using an Arch Linux ISO, the attached display came to life and displayed its console.

However, on Windows, even with cpu: host,hidden=1 (which adds kvm=off) and x-vga=on (which adds hv_vendor=proxmox), I still get a Code 43. I also tried editing QemuServer.pm so it sets hv_vendor to something other than "proxmox", in case the driver started looking for that string.
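For what it's worth, here is a small sketch of how one could sanity-check that those flags actually end up in the generated QEMU command line. The helper function and the sample command line are my own illustrations; the exact argument name (hv_vendor_id vs. hv_vendor) may differ between versions, so the check only looks for the common prefix:

```shell
#!/bin/sh
# Greps a QEMU command line for the two Nvidia-hiding flags:
#  - kvm=off      (added by "cpu: host,hidden=1")
#  - hv_vendor... (the masked Hyper-V vendor string)
check_flags() {
  cmd="$1"
  case "$cmd" in
    *kvm=off*) echo "kvm hidden: yes" ;;
    *)         echo "kvm hidden: NO" ;;
  esac
  case "$cmd" in
    *hv_vendor*) echo "hv vendor masked: yes" ;;
    *)           echo "hv vendor masked: NO" ;;
  esac
}

# Example against the kind of line "qm showcmd" prints:
check_flags "/usr/bin/kvm -cpu host,hv_vendor_id=proxmox,+kvm_pv_unhalt,kvm=off -m 8192"
```

On a real host you would feed it the output of qm showcmd with your VM id, e.g. check_flags "$(qm showcmd 103)".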

I don't really know what's left that I could try, so any help is appreciated. Thanks in advance
 

devilrunner

New Member
Aug 3, 2015
Did you ever manage to solve your problem?

I am also having problems passing through my Nvidia GPU; AMD works fine.

Virtual Environment 4.3-14/3a8c61c7
------------------------------------------------
Code:
proxmox-ve: 4.3-75 (running kernel: 4.4.35-1-pve)
pve-manager: 4.3-14 (running version: 4.3-14/3a8c61c7)
pve-kernel-4.4.35-1-pve: 4.4.35-75
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.24-1-pve: 4.4.24-72
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-101
pve-firmware: 1.1-10
libpve-common-perl: 4.0-83
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-19
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-87
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
VMID.conf
-------------
Code:
agent: 1
bios: ovmf
boot: c
bootdisk: virtio0
cores: 4
cpu: host
efidisk0: ZFS_Pool_RAID0-SSDs:vm-101-disk-2,size=128K
hostpci0: 01:00.0,x-vga=on
hostpci1: 01:00.1
ide0: none,media=cdrom
keyboard: fr-be
memory: 4096
name: Win10-B
net0: virtio=42:E7:16:A1:9F:19,bridge=vmbr0
numa: 0
ostype: other
scsihw: virtio-scsi-pci
smbios1: uuid=476da18b-a08f-4303-a463-3a07e7347d0c
sockets: 1
vga: qxl
virtio0: ZFS_Pool_RAID0-SSDs:vm-101-disk-1,cache=writeback,iothread=1,size=64G
I have tried a million things but keep getting Code 43 on my Nvidia card.
 

devilrunner

New Member
Aug 3, 2015
9
0
1
Maybe some smart people from the excellent Proxmox staff can point me in the right direction for Nvidia passthrough on a Windows guest VM?

This is the command that gets generated:

Code:
/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=476da18b-a08f-4303-a463-3a07e7347d0c' -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/kvm/OVMF_CODE-pure-efi.fd' -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/dev/zvol/ZFS_Pool_RAID0-SSDs/vm-101-disk-2' -name Win10-B -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,kvm=off' -m 4096 -k fr-be -object 'iothread,id=iothread-virtio0' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'vfio-pci,host=01:00.0,id=hostpci0,bus=pci.0,addr=0x10' -device 'vfio-pci,host=01:00.1,id=hostpci1,bus=pci.0,addr=0x11' -chardev 'socket,path=/var/run/qemu-server/101.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -spice 'tls-port=61002,addr=localhost,tls-ciphers=DES-CBC3-SHA,seamless-migration=on' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:402b5ab46253' -drive 'if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0' -drive 'file=/dev/zvol/ZFS_Pool_RAID0-SSDs/vm-101-disk-1,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,bootindex=100' -netdev 
'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=42:E7:16:A1:9F:19,netdev=net0,bus=pci.0,addr=0x12,id=net0'
 

fridtjof

New Member
Oct 8, 2016
The problem I had with my 650 Ti was that it still had a BIOS-only ROM and would not work with OVMF properly.

The 750 Ti however supports UEFI.

If, on boot, your screen connected to the GPU shows a big Proxmox logo, everything should work correctly.
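In case it helps anyone hitting the same wall: here is a rough sketch of how one could check a dumped ROM for an EFI image. The sysfs dump commands and the offsets are my own reading of the PCI option ROM layout (images start on 512-byte boundaries with signature 55 AA; EFI images additionally carry 0x0EF1 at offset 4), so treat it as a heuristic, not gospel:

```shell
#!/bin/sh
# Rough check whether a dumped GPU ROM contains an EFI (UEFI GOP) image.
# Dump the ROM first (paths are for a card at 01:00.0 -- adjust):
#   echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
#   cat /sys/bus/pci/devices/0000:01:00.0/rom > /tmp/vbios.rom
#   echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom
has_efi_image() {
  rom="$1"
  size=$(wc -c < "$rom")
  off=0
  while [ "$off" -lt "$size" ]; do
    # read 6 bytes at each 512-byte boundary as a hex string
    hdr=$(dd if="$rom" bs=1 skip="$off" count=6 2>/dev/null | od -An -tx1 | tr -d ' \n')
    case "$hdr" in
      55aa????f10e) echo "EFI image found at offset $off"; return 0 ;;
    esac
    off=$((off + 512))
  done
  echo "no EFI image found (BIOS-only ROM?)"
  return 1
}

# Usage: has_efi_image /tmp/vbios.rom
```

If it reports a BIOS-only ROM, OVMF will not initialize the card, which matches what I saw on the 650 Ti.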
 

devilrunner

New Member
Aug 3, 2015
9
0
1
The GPU gets initialized properly and the screen works.
However, when I install the Nvidia drivers, the device gets Code 43 and won't be fully operational.
This seems to be a known 'bug' in the Nvidia drivers: they detect the KVM and Hyper-V hypervisor flags and disable the device.

I used to be able to work around this by hiding the KVM flags etc., but for some reason that no longer appears to work.
Does anybody else have experience with this?
 

fridtjof

New Member
Oct 8, 2016
I think you're missing the hidden=1 flag, like this:
cpu: host,hidden=1

On the latest Proxmox, this also circumvents the Hyper-V detection by setting the Hyper-V vendor to "proxmox" (the driver checks this field against a blacklist, which does not include "proxmox" or any other bogus vendor you could set).

also set the machine type to q35:
machine: q35
 

fridtjof

New Member
Oct 8, 2016
You may also want to set the pcie=1 flag on both the GPU and its audio device. I can't say exactly what difference it makes, but since the GPU is a PCIe device anyway, presenting it to the guest as PCIe should be more correct and may improve performance.
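For devilrunner's card that would look something like this (device addresses taken from his posted config; note that pcie=1 requires the q35 machine type):

```
machine: q35
hostpci0: 01:00.0,pcie=1,x-vga=on
hostpci1: 01:00.1,pcie=1
```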
 

devilrunner

New Member
Aug 3, 2015
Thanks fridtjof, I will try those settings tomorrow.

I haven't had very good experiences with the q35 machine type and pcie in the past with my GPU cards.

That's why I've stuck with the i440fx model type and regular PCI passthrough.
Maybe these work better now that QEMU has come along...?
 

mcflym

Member
Jul 10, 2013
Which kind of VM do you use? I had similar problems with Windows 10. Windows 8.1 worked perfectly in the end.

I don't know why but this is the deal for me now...
 

fridtjof

New Member
Oct 8, 2016
I use Windows 10. OS type is set to "win8", which also works fine for Windows 10 (devilrunner, I recommend you set this too - it may enable some specific Hyper-V accelerations).

One small tip for when it's set up: don't ever pass through individual host USB ports. Only pass through entire USB controllers.
The reason is that the emulated USB port inside the guest VM is not 100% perfect, which generated loads of DPC latency for me.
That lagged the whole VM, including the mouse pointer, and messed with audio output (crackling, stutter, skips).
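To find a whole controller to pass through instead, a sketch like this can help: it lists PCI USB controllers together with their IOMMU group (the SYSFS variable just defaults to /sys; the class code and sysfs layout are standard Linux, but verify the group contents before passing anything through):

```shell
#!/bin/sh
# List PCI USB controllers together with their IOMMU group, so you can
# pick a controller whose whole group may be handed to the VM.
SYSFS="${SYSFS:-/sys}"

list_usb_controllers() {
  for dev in "$SYSFS"/bus/pci/devices/*; do
    class=$(cat "$dev/class" 2>/dev/null)
    # 0x0c03xx is the PCI class code for USB controllers
    case "$class" in
      0x0c03*)
        group=$(basename "$(readlink "$dev/iommu_group")")
        echo "$(basename "$dev") iommu_group=$group"
        ;;
    esac
  done
}
```

The address it prints (e.g. 0000:07:00.0) is what goes into the hostpci line as 07:00, like in my config above.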

Regarding q35 and pcie=1 - I have not experienced any problems with these options so far, and I believe (though I haven't compared) they give you more performance than plain PCI (not least because the GPU is a PCIe device anyway).
 
