Code 43 NVIDIA driver error within Windows 10 VM, Geforce 750 Ti Passthrough

Hipurnism

New Member
Jan 7, 2017
All the info I have found on the internet says to use the romfile option for this bug.
Also, maybe try changing the PCI slot?
Made no difference. The GT630 can only be booted once; the Quadro can be booted every time.

Can you post your VM config file?
Sure:
Code:
balloon: 0
bootdisk: virtio0
cores: 4
cpu: host
hostpci0: 04:00,pcie=1,x-vga=on,romfile=/tmp/gt630.rom
hotplug: disk,network,usb
ide2: none,media=cdrom
machine: q35
memory: 8192
name: w10-test01
net0: virtio=0A:C9:E0:EE:94:56,bridge=vmbr0
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=b9c0680a-6068-4bba-bf9e-571d8f76cd3a
sockets: 1
usb0: host=1-1.3
usb1: host=3-4
virtio0: local-lvm:vm-200-disk-1,size=32G
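For reference, the `romfile=/tmp/gt630.rom` line above points at a vBIOS image dumped from the card. A common way to dump it through sysfs is sketched below (a sketch only, assuming the PCI address 04:00.0 from the config above; the dump can fail if a driver is actively using the card):

```shell
# Sketch: dump the vBIOS of the GPU at 04:00.0 via the sysfs "rom" attribute.
cd /sys/bus/pci/devices/0000:04:00.0/
echo 1 > rom             # make the ROM readable
cat rom > /tmp/gt630.rom # copy it out
echo 0 > rom             # lock it again
```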
I've tried everything I've found online inside the VM: turning ballooning off, fixed-size memory allocation... The virtio0 disk is in raw format with no cache.
I'll rebuild my VMs, as I have another W10 one from the same template where the I/O is around 75-90% and the system feels snappier. That's still very high for a fresh system, but better.

Update:
With SATA the disk usage is 0-25%, constantly 0 on an idle system. With virtio it's 100%. WTF? Any ideas? Does anybody else experience the same?
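If anyone wants to reproduce the comparison, one way is to re-attach the same volume on a SATA bus (a sketch using `qm set`; VM id 200 and the volume name are taken from the config above, so adjust to your setup):

```shell
# Sketch: move the existing disk from virtio to SATA to compare I/O behaviour.
qm set 200 --delete virtio0                   # detach; the volume shows up as "unused"
qm set 200 --sata0 local-lvm:vm-200-disk-1    # re-attach the same volume on SATA
qm set 200 --bootdisk sata0                   # keep booting from it
```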

I have found some posts saying to pass this kernel option to GRUB:
Code:
"video=efifb:off"
Man, this did the trick!
This kernel parameter + romfile is the magic combo!
Maybe it'd be best if you updated the wiki with this info. It might save some souls who want to use a headless host with a graphical VM.
Note: with this option the host loses its screen as soon as the kernel loads.
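For anyone following along, the change amounts to one line in /etc/default/grub (a sketch; the file and variable are standard on Debian-based Proxmox installs, but keep whatever other options you already have on that line):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off"
```

Then run `update-grub` and reboot so the new command line takes effect.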
 

Dorin

Member
Sep 11, 2017
I followed the thread from https://forum.proxmox.com/threads/gpu-passthrough-tutorial-reference.34303/ and ended up with a Win7 VM that is unable to complete the boot process (https://forum.proxmox.com/threads/gpu-passthrough-tutorial-reference.34303/#post-181095).

Now I have a Win10 guest VM with error code 43 reported in the device status. If I try to install the video card's driver (GTX 1050 Ti) I get an error ("This NVIDIA graphics driver is not compatible with this version of Windows." ...) and the installation process stops.

Reading this thread I saw some existing solutions for this issue ("Code 43"), but I'm a little confused and not sure where I should start.

The current config is:
Code:
bios: ovmf
bootdisk: virtio0
cores: 4
cpu: host
efidisk0: local-zfs:vm-101-disk-3,size=128K
ide0: none,media=cdrom
machine: q35
memory: 8192
name: Win10x64
net0: e1000=E2:F8:A4:AC:1A:32,bridge=vmbr0
numa: 0
ostype: win10
scsi0: none,media=cdrom
scsihw: virtio-scsi-pci
smbios1: uuid=35da8b04-26a6-4bf4-8f85-a142eaf5eabd
sockets: 1
virtio0: local-zfs:vm-101-disk-1,cache=none,size=250G
virtio1: local-zfs:vm-101-disk-2,cache=none,size=250G
#hostpci0: 01:00,x-vga=on,pcie=1,romfile=vbios.bin
hostpci0: 01:00,x-vga=on,pcie=1

pveversion -v:
Code:
proxmox-ve: 5.0-19 (running kernel: 4.10.17-2-pve)
pve-manager: 5.0-30 (running version: 5.0-30/5ab26bc)
pve-kernel-4.10.17-2-pve: 4.10.17-19
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-14
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-3
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90

Is the result from https://forum.proxmox.com/threads/code-43-nvidia-driver-error-within-windows-10-vm-geforce-750-ti-passthrough.23746/page-2#post-131446 due to the patch applied in https://forum.proxmox.com/threads/code-43-nvidia-driver-error-within-windows-10-vm-geforce-750-ti-passthrough.23746/page-2#post-131464?
Is this patch still valid for PVE 5? In short, how should it be done?
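For context, independent of any patch, the workaround most often cited for Code 43 is hiding the hypervisor from the guest, because the consumer NVIDIA driver refuses to initialize when it detects virtualization. A sketch of the VM conf line commonly posted for PVE 5 (verify the exact syntax against your qemu-server version):

```
cpu: host,hidden=1
```

or, on versions without the `hidden` flag, via a raw QEMU args line:

```
args: -cpu host,kvm=off,hv_vendor_id=proxmox
```

These are alternatives, not to be combined; the `hv_vendor_id` value is arbitrary.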
 



mcflym

Member
Jul 10, 2013
I am running a GTX 1050 Ti too and also had Code 43 issues in Windows 10.
Only Windows 8.1 worked for me...

Just try it
 

Dorin

Member
Sep 11, 2017
I am running a GTX 1050 Ti too and also had Code 43 issues in Windows 10.
Only Windows 8.1 worked for me...

Just try it
Do I need to follow any additional instructions beyond what I have already done for the Win7 and Win10 VMs?
How does the GPU perform in your guest?
Thank you for the feedback.
 

Dorin

Member
Sep 11, 2017
I tried with Win 8.1 Pro x64 and I'm facing the same issue, though this time I was somehow able to install the driver.
However, the final result: error code 43.

This was the order of events:
-vm boot
-error code 43
-driver uninstalled from device manager (nvidia)
-driver installed (nvidia)
-vm restarted
-error code 43
 


mcflym

Member
Jul 10, 2013
Performance is nearly like running a physical machine.

The only thing I can do for you is to post my config:

Code:
bios: ovmf
cores: 2
cpu: Haswell-noTSX
hostpci0: 01:00,x-vga=on
numa: 0
ostype: win8
Nothing else in the guest config is relevant for this...
 

Dorin

Member
Sep 11, 2017
Performance is nearly like running a physical machine.

The only thing I can do for you is to post my config:

Code:
bios: ovmf
cores: 2
cpu: Haswell-noTSX
hostpci0: 01:00,x-vga=on
numa: 0
ostype: win8
Nothing else in the guest config is relevant for this...
This is what i have in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off"

I added disable_vga=1 in /etc/modprobe.d/vfio.conf:
options vfio-pci ids=10de:1c82,10de:0fb9 disable_vga=1

VM's conf file:
hostpci0: 01:00,x-vga=on

By default, without the hostpci0 parameter in the VM's conf file, I noticed the Microsoft Basic Display Adapter at PCI bus 0, device 1, function 0 (Location, from Device Manager), which seems to be related to the following entry of IOMMU group 1: /sys/kernel/iommu_groups/1/devices/0000:00:01.0.
Finally, with the hostpci0 parameter enabled, I ended up with error code 43 again (and error code 32 before driver installation :)) ), but the location shown in Device Manager for the GPU changed from the previous trial (different bus, different device number): from PCI bus 1, device 0, function 0 to PCI bus 6, device 16, function 0.

I have a feeling that it should somehow work, because in the Win7 VM the GPU was assigned, but the VM didn't boot completely and froze.
How are these files (/etc/modprobe.d/vfio.conf, /etc/default/grub) configured in your setup?

Thanks.
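One sanity check worth doing after editing /etc/modprobe.d/vfio.conf (a sketch; module options only take effect at boot, so the initramfs needs refreshing first):

```shell
update-initramfs -u   # bake the new vfio-pci options into the initramfs
reboot
# After reboot, check which driver owns both GPU functions:
lspci -nnk -s 01:00
#   -> look for "Kernel driver in use: vfio-pci" under each function
```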


Additional info:
-Host system:
Xeon 1225 v3 CPU with onboard Intel graphic card.
The BIOS does not offer the option to manually select the primary display adapter.
I have an additional PCI-E 2 slot, but the GPU doesn't fit there; there is not enough room in that part of the case.

-GPU in slot 1 of the system:
root@proxmox:~# lspci -n -s 01:00
01:00.0 0300: 10de:1c82 (rev a1)
01:00.1 0403: 10de:0fb9 (rev a1)

-Interrupts available (DMAR: ... ecap f010da):
root@proxmox:~# dmesg | grep ecap
[ 0.026121] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap d2008c20660462 ecap f010da
[ 5682.589511] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900

-IOMMU groups (three entries in group 1):
root@proxmox:/# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/7/devices/0000:00:1c.0
/sys/kernel/iommu_groups/5/devices/0000:00:1a.0
/sys/kernel/iommu_groups/3/devices/0000:00:16.3
/sys/kernel/iommu_groups/3/devices/0000:00:16.0
/sys/kernel/iommu_groups/11/devices/0000:03:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.1
/sys/kernel/iommu_groups/6/devices/0000:00:1b.0
/sys/kernel/iommu_groups/4/devices/0000:00:19.0
/sys/kernel/iommu_groups/2/devices/0000:00:14.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.3
/sys/kernel/iommu_groups/10/devices/0000:00:1f.2
/sys/kernel/iommu_groups/10/devices/0000:00:1f.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1d.0
 


