[SOLVED] GPU Passthrough with Nvidia GT 1030

Oct 15, 2019
Hey folks! I've followed the GPU passthrough guide multiple times now in an attempt to get passthrough working with an Nvidia GT 1030. The Windows 10 VM I'm passing the card through to always ends up with Code 43: it sees the card and the drivers install, but no dice.

My VM config:

Code:
agent: 1
bios: ovmf
boot: cdn
bootdisk: scsi0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: cephfs_rbd:vm-106-disk-1,size=128K
hostpci0: 08:00,pcie=1,romfile=GP108.rom,x-vga=1
ide2: cephfs_installers:iso/virtio-win-0.1.171_1_.iso,media=cdrom,size=363020K
machine: pc-q35-3.1
memory: 8192
name: Graphics-Test
net0: virtio=9E:DC:DD:A6:9A:CA,bridge=vmbr0,firewall=1
numa: 1
ostype: l26
scsi0: cephfs_rbd:vm-106-disk-0,size=200G
scsihw: virtio-scsi-pci
smbios1: uuid=c498519f-16c7-48a2-98b4-7de667a7d215
sockets: 1
vga: none
vmgenid: 93e89e94-5e20-4786-8080-f6acd45acedf

GRUB configuration, with IOMMU enabled and some modifications I've tried, with no luck:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on vfio_iommu_type1.allow_unsafe_$

I've also changed the machine setting from q35 to pc-q35-3.1 per a post I saw from another member, and made several other minor adjustments in an attempt to get this working, with no luck. I grabbed the ROM file off the card via GPU-Z, added it to the host, and altered the VM config accordingly. I feel like I'm banging my head against a wall and/or missing something really obvious here. Any help is greatly appreciated!
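For context, the usual host-side steps from the passthrough guide look roughly like this; treat it as a sketch of the typical Intel VT-d setup rather than my literal files, since the exact parameters (including whatever my truncated GRUB line above contained) may differ:

Code:
# /etc/default/grub -- enable the IOMMU on the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# apply the changes and reboot the node
update-grub
update-initramfs -u -k all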
 
ostype: l26
if you use Windows, please change the ostype to the correct Windows version. This enables some Windows-specific behavior, e.g. the Hyper-V enlightenments, and sets the hv-vendor-id in the case of GPU passthrough, etc.
 
do you mean something like:
ostype: win10
yes

i also run my vm with
machine: q35

Could this be the issue?
probably not. Note that QEMU 4 has a bad default with q35; best use pc-q35-3.1 for now (until we ship QEMU >= 4.0.1 or 4.1)
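For example, assuming the VM ID from the config above is 106, both changes can be applied from the CLI; a quick sketch of the usual commands:

Code:
# set the guest OS type so the Hyper-V enlightenments get enabled
qm set 106 --ostype win10

# pin the machine type to the known-good q35 version
qm set 106 --machine pc-q35-3.1

# print the QEMU command line Proxmox will actually use, to double-check
qm showcmd 106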
 
@dcsapak Thank you for your suggestion. I made the change to ostype: win10 (as you mentioned) and rebooted the VM. I'm still getting Code 43 on the card, however. Is it necessary to reboot the node as well?

@Veeh Yeah, I had better luck getting this to "work" once I changed to pc-q35-3.1 due to the bad default mentioned by dcsapak earlier, but it wasn't long before Code 43 showed back up.

At this point, I've tried so many different things that I'm tempted to just try another video card.
 
@dcsapak: thanks :)

@admin-rack.management: I had a hard time setting up GPU passthrough on my VM at first. My issue was that my motherboard is CrossFire-enabled, and both the first and second 16x PCIe slots are in the same IOMMU group.
I had to plug the second card into the third PCIe slot, and the issue was resolved.

Did you check whether your card is the only thing present in the 08:00 IOMMU group?
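Something like this quick sketch (it just walks the standard sysfs layout) will show what else, if anything, shares each group:

Code:
#!/bin/bash
# list every PCI device grouped by IOMMU group
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "  "
        lspci -nns "${d##*/}"
    done
done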
 
@dcsapak Is there a "recommended" GPU for this sort of process, or something that you know is confirmed working? I've been working on this issue for days; we're trying to get this going and add cards to all six of our nodes so we can have failover/high availability for clients. Thanks for your help with this, and the same to you @Veeh.
 
Is it necessary to reboot the node as well?
no, but sometimes when I tested, as soon as Windows or the Nvidia driver (I'm not sure which it was) detected that it was running in a VM, it never let me use the device without Code 43 again
(a reinstall of the VM did help, though)
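One quick way to see whether the hiding actually reaches QEMU (a sketch, assuming VM ID 106) is to look at the generated -cpu argument; with cpu: host,hidden=1 and a Windows ostype plus GPU passthrough it should contain kvm=off and an hv_vendor_id:

Code:
# split the generated command into tokens and show the -cpu option plus its value
qm showcmd 106 | tr ' ' '\n' | grep -A1 '^-cpu'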
 
@dcsapak This was a newly created VM that the issue was cropping up on, which I've since made modifications to (config, etc.). Or are we talking about reinstalling only Windows within the virtual machine?
 
I installed a new Windows 10 VM to test with and the same issue persists. If I enable pc-q35-3.1 and set kernel_irqchip=on within -machine, the error stops appearing temporarily, but the device then shows that there are no drivers available. I'm installing the driver set again to see if that fixes the issue, but knowing how it went last time, it will likely display Code 43 again.
 
Or are we talking about reinstalling only Windows within the virtual machine?
yes, I was talking about Windows inside the VM. It seems that the Nvidia driver or Windows saved the info that it was a VM...

I installed a new Windows 10 VM to test with and the same issue persists. If I enable pc-q35-3.1 and set kernel_irqchip=on within -machine, the error stops appearing temporarily, but the device then shows that there are no drivers available. I'm installing the driver set again to see if that fixes the issue, but knowing how it went last time, it will likely display Code 43 again.
can you post the current config of the vm again? (just to check)
 
I pasted a screenshot of my VM config. As you can see, there is a second card with the Code 43 error.
I did not put x-vga=1 in the vmid.conf.
I remember having a hard time with Code 43. At first my graphics card was showing Code 43.
I somehow solved the Code 43 issue by not using x-vga=1. I had the Microsoft display adapter showing Code 43, but you can disable it, and I was then able to install the AMD driver.
Have you tried starting your VM without x-vga=1 to see how it turns out?
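In your case that would just mean dropping x-vga=1 from your existing hostpci0 line, i.e. something like:

Code:
hostpci0: 08:00,pcie=1,romfile=GP108.rom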

EDIT: screenshot441 is without x-vga, screenshot4 is with x-vga.
 

Attachments

  • screenshot.441.jpg
  • screenshot.4.jpg
On a whim, I purchased an AMD card (R7 240) and tried that. I ran into zero of these issues. pc-q35-3.1 was not required, nor were some of the other adjustments I had made previously.

Configuration of VM (with AMD GPU):

Code:
agent: 1
bios: ovmf
boot: dc
bootdisk: virtio0
cores: 4
efidisk0: cephfs_rbd:vm-108-disk-1,size=128K
ide2: cephfs_installers:iso/virtio-win-0.1.171_1_.iso,media=cdrom,size=363020K
machine: q35
memory: 8192
name: AMDgraphics
net0: virtio=BA:83:BA:BE:A3:CA,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=7133387d-d0a7-416e-9756-e41e44fd41d9
sockets: 1
virtio0: cephfs_rbd:vm-108-disk-0,size=200G
vmgenid: cba48200-a010-4ba1-8e79-f723b1a8859a

No issues at all. Just working on optimization of the VM at this point.
 
So your initial issue would be related to your graphics card, or the fact that it's Nvidia.
I have a 1080 in my main machine; I'll try it out with this card and report back my results. This should be interesting.
By the way, you don't have a hostpci line in your conf. Does the graphics card work anyway?
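If the line simply got lost in the paste, it would normally look something like the sketch below; the bus address here is just a guess carried over from your earlier Nvidia config, and the AMD card may well sit elsewhere:

Code:
hostpci0: 08:00,pcie=1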
 