Need help with PVE 4.1.1 GeForce GPU passthrough

fxdaemon
Mar 8, 2016
Hi guys,
Very new to Proxmox here. I got PVE 4.1.1 up and running over the weekend after extensive research on the
topic of GPU passthrough, and found that one of the two NVIDIA GeForce GPUs I have (the GT 610) is reported
as working according to this wiki:
https://pve.proxmox.com/wiki/Pci_pa...our_PCI_card_address.2C_and_configure_your_VM

"MD RADEON 5xxx, 6xxx, 7xxx and NVIDIA GEFORCE 7, 8, 4xx, 5xx, 6xx, 7xx have been reported working."

To make a long story short:

H/W:
CPU: i7-5820k
RAM: 32GB
GPU1: Nvidia GeForce GT 610
06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [GeForce GT 610] [10de:104a] (rev a1)
GPU2: Nvidia GeForce GTX980
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 980] [10de:13c0] (rev a1)

OS & S/W:
PVE Manager: 4.1.1/2f9650d4
Kernel: 4.2.6-1-pve #1 SMP Wed Dec 9 10:49:55 CET 2015 x86_64 GNU/Linux
Guest VM: Windows 10
Guest VM <vmid>.conf:
bootdisk: virtio1
cores: 4
ide1: local:iso/virtio-win-0.1.102.iso,media=cdrom,size=156988K
ide2: cdrom,media=cdrom
memory: 8192
name: Windows10
net0: e1000=66:31:36:30:36:34,bridge=vmbr0
numa: 1
ostype: win8
smbios1: uuid=e1bf4fb7-7e83-492e-92c0-5fc3e6787f27
sockets: 1
virtio0: local:100/vm-100-disk-1.raw,cache=writethrough,size=32G
hostpci0: 06:00.0,x-vga=on

Note: The BIOS is left as the default SeaBIOS, as I started off installing Win 10 with that setting and found that if
I change it to UEFI, the VM can't find the guest OS to boot off the bootdisk.

Relevant configuration changes:
/etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pci-stub.ids=10de:104a"
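
After editing /etc/default/grub, the change only takes effect once the GRUB config is regenerated and the host is rebooted (standard Debian procedure, noted here for completeness):

# regenerate /boot/grub/grub.cfg from /etc/default/grub, then reboot
update-grub
reboot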

/etc/modprobe.d/vfio.conf:
options vfio-pci ids=10de:104a,10de:0e08 disable_vga=1
# options vfio-pci ids=10de:13c0,10de:0fbb disable_vga=1

/etc/modprobe.d/iommu_unsafe_interrupts.conf:
options vfio_iommu_type1 allow_unsafe_interrupts=1

/etc/modules:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
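
A quick way to verify that the modules are loaded and that the GT 610 (and its HDMI audio function) actually ended up bound to vfio-pci or pci-stub, using the addresses from this thread:

# confirm the vfio modules are loaded
lsmod | grep vfio
# confirm the driver binding for the GPU and its audio function;
# the "Kernel driver in use:" line should show vfio-pci (or pci-stub)
lspci -nnk -s 06:00.0
lspci -nnk -s 06:00.1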

Pass-through methods attempted but failed:
GPU Seabios PCI PASSTHROUGH
hostpci0: 06:00,x-vga=on

Running as unit 100.scope.
kvm: -device vfio-pci,host=06:00.0,id=hostpci0,bus=pci.0,addr=0x10,x-vga=on: vfio: Device does not support requested feature x-vga
kvm: -device vfio-pci,host=06:00.0,id=hostpci0,bus=pci.0,addr=0x10,x-vga=on: Device initialization failed

GPU Seabios PCI EXPRESS PASSTHROUGH
machine: q35
hostpci0: 06:00,pcie=1,x-vga=on

Use of uninitialized value $kvmver in pattern match (m//) at /usr/share/perl5/PVE/QemuServer.pm line 6378.
Use of uninitialized value $current_major in numeric ge (>=) at /usr/share/perl5/PVE/QemuServer.pm line 6384.
Running as unit 100.scope.
kvm: -device vfio-pci,host=06:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: vfio: Device does not support requested feature x-vga
kvm: -device vfio-pci,host=06:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: Device initialization failed

Anything I may have missed, or any more info that may help me get past this, is greatly appreciated.

Also, I actually moved from Ubuntu Server 14.04 LTS after succeeding in passing through both GPUs,
one at a time, to Windows 10. However, I ran into error Code 12 (GT 610) and Code 43 (GTX 980; I even get this with kvm=off) and was unable to make any further progress, hence trying out Proxmox. So I had already done
quite a fair bit of research and tried different options before.


Thanks in advance.

FXD
 
Note: The BIOS is left as the default SeaBIOS, as I started off installing Win 10 with that setting and found that if
I change it to UEFI, the VM can't find the guest OS to boot off the bootdisk.

Yes, you need to enable UEFI before installing Windows; Windows won't create the UEFI (EFI system) partition without it.
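
For readers following along, switching the guest to UEFI firmware in PVE 4.x should be a one-line change in the VM config (a sketch, assuming OVMF support in this PVE version; Windows then has to be reinstalled so it creates the EFI system partition):

# in /etc/pve/qemu-server/<vmid>.conf
bios: ovmf
# or equivalently from the host shell (vmid 100 as in this thread):
qm set 100 -bios ovmf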

Also, I actually moved from Ubuntu Server 14.04 LTS after succeeding in passing through both GPUs,
one at a time, to Windows 10. However, I ran into error Code 12 (GT 610) and Code 43 (GTX 980; I even get this with kvm=off) and was unable to make any further progress, hence trying out Proxmox. So I had already done
quite a fair bit of research and tried different options before.

We have fixed the Code 43 error with the latest Proxmox updates (confirmed by 2 Proxmox users).
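
For anyone hitting the same error: on PVE 4.x (Debian Jessie based), the pve-no-subscription updates are pulled in roughly like this (repo line per the Proxmox docs of that era; double-check against the wiki):

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian jessie pve-no-subscription

apt-get update
apt-get dist-upgrade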
 
Thank you very much Spirit. It seems what I missed is exactly the pve-no-subscription repository update.
As soon as I had done that, I got the GT 610 to pass through, and after installing the Windows 10 GT 610 driver
it comes up recognised as an NVIDIA GeForce GT 610!!!

Thanks so much, and I'm feeling embarrassed for thinking I had done enough research while missing that basic step.

Will try the GTX 980 next and report back!!!

Great work Proxmox!!!
 
Ok, with the GTX 980 I am getting a black screen overwriting the original/default Proxmox login prompt.
I think it is being passed through unsuccessfully, as I don't even see the BIOS screen.
BTW, I've done a brand new Win 10 installation starting with UEFI BIOS, so that's no longer the issue.

Checking dmesg shows following:
[ 0.000000] Console: colour VGA+ 80x25
[ 0.232774] vgaarb: device added: PCI:0000:06:00.0,decodes=io+mem,owns=none,locks=none
[ 0.232774] vgaarb: setting as boot device: PCI:0000:07:00.0
[ 0.232774] vgaarb: device added: PCI:0000:07:00.0,decodes=io+mem,owns=io+mem,locks=none
[ 0.232774] vgaarb: loaded
[ 0.232774] vgaarb: bridge control possible 0000:07:00.0
[ 0.232774] vgaarb: bridge control possible 0000:06:00.0
[ 2.130655] vgaarb: device changed decodes: PCI:0000:07:00.0,olddecodes=io+mem,decodes=none:owns=io+mem
[ 2.207774] snd_hda_intel 0000:06:00.1: Handle VGA-switcheroo audio client

Is it likely that, because 07:00.0 is being used by Proxmox itself as the boot device (per the vgaarb line above), it's still being used by the host, even though I changed the relevant configs in:
<vmid>.conf
grub
vfio.conf

and rebooted?
If so, is there a way to instruct vgaarb not to use it as the default boot device *without* physically swapping PCI slots with the GT 610?
I am only guessing that vgaarb just picks the last available PCI or PCIe GPU and uses it?
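
A quick check of which card the kernel marked as the boot VGA device, via the sysfs boot_vga attribute and using the addresses from this thread (1 = selected as the boot GPU):

cat /sys/bus/pci/devices/0000:06:00.0/boot_vga
cat /sys/bus/pci/devices/0000:07:00.0/boot_vga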

Thanks again.

Rgds,
FXD
 
Hmm, I really don't know here.
I know there is a VGA arbiter patch that exists and is not currently applied to the Proxmox kernel.
But I'm not sure it's related, because the card is "unlinked" from the host when the VM starts.

It would be great to test:

- GT610 standalone
- GTX980 standalone
- GT610 + GTX980, switching PCI slots
 
it's still being used by the host, even though I changed the relevant configs and rebooted?
If so, is there a way to instruct vgaarb not to use it as the default boot device *without* physically swapping PCI slots with the GT 610?
I am only guessing that vgaarb just picks the last available PCI or PCIe GPU and uses it?

Just to confirm: did you go into your host's UEFI and change the primary graphics to the Intel iGPU? Because if not, that might be (or probably is) the reason why the host is still using the GPU in the first PCIe slot.

Could you also post the output of "lspci -nnk" and "find /sys/kernel/iommu_groups/ -type l"? Preferably after you've confirmed that your motherboard is using the iGPU as the primary display device.
 
Just to confirm: did you go into your host's UEFI and change the primary graphics to the Intel iGPU? Because if not, that might be (or probably is) the reason why the host is still using the GPU in the first PCIe slot.

Could you also post the output of "lspci -nnk" and "find /sys/kernel/iommu_groups/ -type l"? Preferably after you've confirmed that your motherboard is using the iGPU as the primary display device.

Hi Shawly,
Thanks for the suggestion. I will double-check, but my MSI S99A Plus mobo's BIOS doesn't seem to have that option to select which GPU to use as the boot-up GPU. That's most likely because the mobo doesn't have an on-board integrated GPU to start with; I should have
been clearer about that in my first post. So I am running with only 2 PCI/PCIe GPUs in the system.

The IOMMU grouping for the 2 GPUs is:
[ 0.528575] iommu: Adding device 0000:06:00.0 to group 32
[ 0.528597] iommu: Adding device 0000:06:00.1 to group 32
[ 0.528630] iommu: Adding device 0000:07:00.0 to group 33
[ 0.528655] iommu: Adding device 0000:07:00.1 to group 33
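
The same grouping can be read back from sysfs at any time with the command shawly mentioned above:

find /sys/kernel/iommu_groups/ -type l
# expected output includes lines like:
# /sys/kernel/iommu_groups/32/devices/0000:06:00.0
# /sys/kernel/iommu_groups/33/devices/0000:07:00.0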

And I am passing through both the GPU and the HD audio function on the same card. I definitely get audio for the GT 610 on 06:00.

Also, I have yet to try out Spirit's suggestion of different GPU configurations. Unfortunately my GTX 980 is too long to be relocated to another unused PCIe slot without hitting other components or the PSU in the case, so it looks like I am stuck with the GTX 980 on 07:00. I can only test the system with 1
GPU at a time. If I can re-create the black screen with only the GT 610 installed, that will at least confirm the theory that the host is unable to release it to the VM.

BTW, I managed to mess up a successful GT 610 passthrough after a Windows 10 auto-update... the GT 610 driver was
updated to the latest (but not greatest) version, ending up with a black screen as of last night. Damn...

Rgds,
FXD
 
BTW, I managed to mess up a successful GT 610 passthrough after a Windows 10 auto-update... the GT 610 driver was
updated to the latest (but not greatest) version, ending up with a black screen as of last night. Damn...

Have you tried to reboot the host completely?

Sometimes the GPU card reset doesn't work properly, and a hard reboot of the host is the only solution.
 
Have you tried to reboot the host completely?

Sometimes the GPU card reset doesn't work properly, and a hard reboot of the host is the only solution.
Yup, did that a few times already. So now I'm sidetracked into downloading all the older NVIDIA drivers and trying them one at a time until I find the
working one :-( If/when I find the working one, I will share it back on this forum.
 
Yup, did that a few times already. So now I'm sidetracked into downloading all the older NVIDIA drivers and trying them one at a time until I find the
working one :-( If/when I find the working one, I will share it back on this forum.

Happy to report back that NVIDIA graphics driver 353.62 for the GT 610 manages to bring it back, recovering from the boot-up black screen... Yay!
Now I can get back to part 2 of the original attempt: getting the GTX 980 to pass through.
 
Oh, so you can't use an iGPU; then it really sounds like the problem that is answered in question 4 here: http://vfio.blogspot.de/2014/08/vfiovga-faq.html

You probably have to recompile your kernel with this patch: https://lkml.org/lkml/2014/5/25/94
I'm not sure, so I can't guarantee success, but it's worth a try, because recompiling the Proxmox kernel is a matter of minutes.

Or buy another mainboard with support for iGPUs. :)
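
For reference, a rough sketch of rebuilding the PVE 4.x kernel with an extra patch (assuming the pve-kernel-jessie build repository at git.proxmox.com; untested, and the exact patch hookup may differ):

git clone git://git.proxmox.com/git/pve-kernel-jessie.git
cd pve-kernel-jessie
# copy the vga arbiter patch into the repo and wire it into the list of
# patches applied by the Makefile, then build and install the .deb packages
make
dpkg -i pve-kernel-*.deb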

I'm interested to know whether the patch is needed or not.
If it is needed, I could add it officially to the Proxmox kernel.
 
I'm interested to know whether the patch is needed or not.
If it is needed, I could add it officially to the Proxmox kernel.

AFAIK this patch is only needed when you don't have an iGPU but want to pass through your host graphics (the 980 in this case), just like the other arbiter patch is only needed when you want to pass through your iGPU to a guest. There are also no kernel command-line options to enable or disable this patch, so IMHO I wouldn't include it in the kernel.
Maybe you could provide another kernel for users who need this patch, but without further testing to confirm that it doesn't change the way the VGA arbiter handles things, I wouldn't include it. I would tell you if I could, but I'm just a Java developer, so I have no idea what this patch actually does.
 
Oh, so you can't use an iGPU; then it really sounds like the problem that is answered in question 4 here: http://vfio.blogspot.de/2014/08/vfiovga-faq.html

You probably have to recompile your kernel with this patch: https://lkml.org/lkml/2014/5/25/94
I'm not sure, so I can't guarantee success, but it's worth a try, because recompiling the Proxmox kernel is a matter of minutes.

Or buy another mainboard with support for iGPUs. :)

Thanks man, let me give that patch a try and report back the outcome.

Cheers
FXD
 
Hi guys,

Sorry for hijacking the thread, but I am in a similar situation to fxdaemon. I also have two NVIDIA graphics cards: a GT 720 and a GTX 480. I have been able to pass through the 480 when the 720 is installed in the first slot at the same time. As fxdaemon describes, it doesn't work with only the 480 installed. I am using the "GPU SeaBios PCI-passthrough" configuration and I have the latest version from the pve-no-subscription repository. I have tried all the other configurations described on your wiki page, and this is the only one working for me.

I would also like to pass through a dedicated PCI sound card to my VM. My motherboard (Supermicro X10DAi + 2x Xeon E5-2620 v3) only supports 2 PCI slots for the first CPU, so I have to move both the GPU and the sound card to the PCI slots of the second CPU. When I do this it doesn't work anymore: the screen is black (I have of course changed the PCI slot address, in this case 03:00 -> 80:00). There are no error messages and the web interface says that the VM is running. I have tried all the other configurations described on the wiki page too. I have created a new VM and reinstalled Windows 10 without any luck.

Is there any extra configuration needed when trying to pass through PCI slots attached to the second CPU?

@fxdaemon: Have you tried the kernel patch yet? If so, did it work? I am too scared to try it out myself at the moment ^^
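
Regarding the second-CPU slots: one quick thing worth checking is which NUMA node a passed-through device sits on, via its sysfs numa_node attribute (a diagnostic sketch, using the 80:00 address from this post; devices behind the second socket should report node 1):

cat /sys/bus/pci/devices/0000:80:00.0/numa_node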
 
I've tried switching PCI slots for the GPUs and the sound card, and now I have gotten nearly everything to work. I have the 720 in the first PCIe slot, the 480 in the next one (both on CPU0), and the sound card in a PCI slot on CPU1. Now I can boot into my VM, and the sound card shows up and can be installed, but I can't get any audio out of it. I don't know if that is because of QEMU or because of the hardware or drivers. I just tried it out in another computer with W10 installed and it works without any problems.
 
Hi guys,

Sorry for hijacking the thread, but I am in a similar situation to fxdaemon. I also have two NVIDIA graphics cards: a GT 720 and a GTX 480. I have been able to pass through the 480 when the 720 is installed in the first slot at the same time. As fxdaemon describes, it doesn't work with only the 480 installed. I am using the "GPU SeaBios PCI-passthrough" configuration and I have the latest version from the pve-no-subscription repository. I have tried all the other configurations described on your wiki page, and this is the only one working for me.

I would also like to pass through a dedicated PCI sound card to my VM. My motherboard (Supermicro X10DAi + 2x Xeon E5-2620 v3) only supports 2 PCI slots for the first CPU, so I have to move both the GPU and the sound card to the PCI slots of the second CPU. When I do this it doesn't work anymore: the screen is black (I have of course changed the PCI slot address, in this case 03:00 -> 80:00). There are no error messages and the web interface says that the VM is running. I have tried all the other configurations described on the wiki page too. I have created a new VM and reinstalled Windows 10 without any luck.

Is there any extra configuration needed when trying to pass through PCI slots attached to the second CPU?

@fxdaemon: Have you tried the kernel patch yet? If so, did it work? I am too scared to try it out myself at the moment ^^
No, unfortunately I ran into an error running "make" and haven't had the time to investigate it further.
However, I did take a look at the vgaarb patch, and again I'm really unsure what it's meant to be fixing. As there is no configurable option or boot
switch that can be specified, I'm wondering whether, even if I get it all compiled successfully, it will actually allow me to choose/specify the GPU
to use as the boot-up host GPU. So I am taking a breather from this patch for now.
 
I've tried switching PCI slots for the GPUs and the sound card, and now I have gotten nearly everything to work. I have the 720 in the first PCIe slot, the 480 in the next one (both on CPU0), and the sound card in a PCI slot on CPU1. Now I can boot into my VM, and the sound card shows up and can be installed, but I can't get any audio out of it. I don't know if that is because of QEMU or because of the hardware or drivers. I just tried it out in another computer with W10 installed and it works without any problems.

Hi Scomber, I read your post as saying that by moving the GPUs around different slots, you managed to get the intended GPU
passed through. Is that correct?

Thanks,
FXD
 
Hi Scomber, I read your post as saying that by moving the GPUs around different slots, you managed to get the intended GPU
passed through. Is that correct?

Thanks,
FXD
Yes! As of now I have a working configuration. I have the GT 720 in PCIe slot 0 (CPU0) and the GTX 480 in PCIe slot 1 (CPU0), which I have passed through to the VM. However, there seemed to be a problem with the dedicated sound card (Asus Xonar DX) when running it through passthrough: it shows up correctly in Windows 10 and I am able to install the drivers, but it delivers no sound (it works in another computer with W10 that I have). Thankfully my motherboard has onboard sound, so I have passed that through instead, and now everything works to my satisfaction. The VM is intended to act as an HTPC.
 
