[SOLVED] Nvidia RTX 2060 / Ryzen 3700x Passthrough

js2 (New Member), Apr 9, 2020
Hello Everyone,

First off, I appreciate your time! I've spent a few days tinkering and reading, but so far I have failed at VFIO. I was able to get the free version of ESXi to pass through my RTX 2060, so the hardware should be capable. I've gone through this guide a few times: https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/

My Setup:
Gigabyte Aorus x570m ITX Motherboard
Ryzen 3700x
Zotac RTX 2060

To state my problem briefly: my virtual machine never starts when my GPU is attached. It just spins forever on "starting". I've tried the web GUI and "qm start ID" over SSH; the terminal never returns any output either. I have to hard power off the host to cancel the VM's start action.

Some Background Info:
- When I run "lspci -vnn", the GPU and audio functions always show "vfio-pci". The USB and serial functions sometimes show "vfio-pci", but I'm not sure why they flip back to xhci_hcd (USB) and nvidia-gpu (serial). I wonder if I need to pass through all four functions?
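For reference, the binding can be checked per function by pulling the "Kernel driver in use" line out of lspci's output. This is a sketch shown against a canned sample so the extraction is clear; on a live host you would pipe in `lspci -ks 09:00.0` (the GPU address from this thread) instead:

```shell
# Extract the bound driver from lspci -k style output.
sample='09:00.0 VGA compatible controller: NVIDIA Corporation Device 1f08
	Kernel driver in use: vfio-pci
	Kernel modules: nouveau'
echo "$sample" | awk -F': ' '/Kernel driver in use/ {print $2}'
# prints: vfio-pci
```

Running this against each of the four functions (09:00.0 through 09:00.3) shows at a glance which ones vfio-pci actually holds after boot.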

- The same guest VM starts when I remove the GPU, and do nothing else.

- I modified /etc/default/grub. I then ran update-grub :
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction video=vesafb:off,efifb:off"

- /etc/modules contains (one per line):
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

- Ran these:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

- I added radeon, nouveau, nvidia to:
/etc/modprobe.d/blacklist.conf
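For reference, with the three modules named above, the resulting file would look something like this (modprobe blacklist syntax, one module per line):

```shell
# /etc/modprobe.d/blacklist.conf -- keep host GPU drivers from grabbing the card
blacklist radeon
blacklist nouveau
blacklist nvidia
```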

- My GPU Vendor IDs:
09:00.0 0300: 10de:1f08 (rev a1)
09:00.1 0403: 10de:10f9 (rev a1)
09:00.2 0c03: 10de:1ada (rev a1)
09:00.3 0c80: 10de:1adb (rev a1)

- /etc/modprobe.d/vfio.conf contains this:
options vfio-pci ids=10de:1f08,10de:10f9 disable_vga=1
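If it turns out all four functions need to go to the guest, the same file could be extended with the remaining two IDs from the list above. This is a sketch, not a confirmed fix; the softdep lines are an optional extra to make vfio-pci win the load-order race against the competing host drivers:

```shell
# /etc/modprobe.d/vfio.conf -- claim all four functions of the card
options vfio-pci ids=10de:1f08,10de:10f9,10de:1ada,10de:1adb disable_vga=1
# optional: ensure vfio-pci loads before the drivers that otherwise bind
softdep nvidia pre: vfio-pci
softdep xhci_hcd pre: vfio-pci
```

As in the original steps, this only takes effect after update-initramfs -u and a reboot.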

- I ran "update-initramfs -u", rebooted.

- Here is my VM configuration:
balloon: 0
bios: ovmf
bootdisk: scsi0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: Storage:100/vm-100-disk-1.qcow2,size=128K
hostpci0: 09:00,pcie=1
ide2: Storage:iso/SW_DVD9_Win_Pro_10_1909_64BIT_English_Pro_Ent_EDU_N_MLF_X22-17395.ISO,media=cdrom
machine: q35
memory: 8192
name: jsv02-windows
net0: virtio=1E:7E:EE:07:4D:B6,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
parent: NoGPU
sata0: Storage:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
scsi0: Storage:100/vm-100-disk-0.qcow2,backup=0,cache=writeback,iothread=1,replicate=0,size=64G,ssd=1
scsihw: virtio-scsi-pci

- Here are the versions of my setup. I downloaded the latest version of Proxmox. It's basically a new install.
pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-helper: 6.1-6
pve-kernel-5.3: 6.1-5
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-13
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-21
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
Last edited:
can you post the dmesg output?
also others have reported that "video=vesafb:off,efifb:off" is better written as "video=vesafb:off video=efifb:off" (for some, that seemed to fix it)
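For reference, a sketch of how the resulting line in /etc/default/grub might look with the separated arguments (keeping the other flags from the original post; run update-grub afterwards):

```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction video=vesafb:off video=efifb:off"
```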

- When I run "lspci -vnn", the GPU and audio functions always show "vfio-pci". The USB and serial functions sometimes show "vfio-pci", but I'm not sure why they flip back to xhci_hcd (USB) and nvidia-gpu (serial). I wonder if I need to pass through all four functions?
you can try to only pass through the first 2 functions
hostpci0: xx:yy.0;xx:yy.1,pcie=1,...
 
Hey Dominik,

Thanks for your reply! I switched my /etc/default/grub file to use "video=vesafb:off video=efifb:off", ran update-grub, and rebooted.

My VM has the same behavior, but I captured a dmesg about 30 seconds after clicking start. About 5 minutes later I closed my terminal and couldn't get back in over SSH; the host appeared to be frozen, though my web GUI was still responding. I noticed an error underneath the start task of my VM! I had never seen this before. I posted it as a second txt file.

Edit: A couple of things:
- My setup has only one GPU total. My plan is to run my server headless (web GUI only). I am assuming that is okay?
- Since this is an AMD Ryzen system, can I remove "flags=+pcid"?
- I also want to clarify that I attached only the VGA and audio functions during this test.
- I got an error in the event output of the web GUI! I attached it as a txt file.

Edit 2:
To test, I removed +pcid from the CPU flags. This greatly reduced the length of the error. Now I see this:

"iothread is only valid with virtio disk or virtio-scsi-single controller, ignoring
kvm: -device vfio-pci,host=0000:09:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on: Failed to mmap 0000:09:00.0 BAR 3. Performance may be slow
kvm: -device vfio-pci,host=0000:09:00.0,id=hostpci1.0,bus=ich9-pcie-port-2,addr=0x0.0,multifunction=on: vfio 0000:09:00.0: device is already attached
TASK ERROR: start failed: QEMU exited with code 1"

Edit 3:
I'm not sure what these commands do, but they removed the "Failed to mmap" error I was receiving. Now I'm down to just "device is already attached". Here is where I found them: https://forums.unraid.net/topic/71371-resolved-primary-gpu-passthrough/

echo 0 > /sys/class/vtconsole/vtcon0/bind      # detach the kernel virtual consoles
echo 0 > /sys/class/vtconsole/vtcon1/bind      #   from the framebuffer
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind   # release the EFI framebuffer so the GPU's memory regions are free for vfio

For the sake of science, I purchased an AMD RX 5600 XT to test alongside my RTX 2060. I will keep working on the RTX 2060 for a while, but testing another GPU is now an option as well.
 


I figured out why I was receiving the "kvm: -device vfio-pci,host=0000:09:00.0,id=hostpci1.0,bus=ich9-pcie-port-2,addr=0x0.0,multifunction=on: vfio 0000:09:00.0: device is already attached" error. It's funny how cryptic messages are usually pretty literal.

I had added the GPU and audio device through the GUI. When I looked at the config, 09:00 was listed twice! I'm not sure if I did that or if it's a bug. I removed the audio device, and now my VM starts up. However, now I get to deal with "Code 43". I tried a few GRUB commands, but no luck yet. I was able to install the official driver. I will update this when I figure out how to fix the Code 43 error.
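For anyone hitting the same thing, a quick sketch for spotting duplicate PCI addresses in a VM config. It is shown here against a sample file written to /tmp; on a real host the config lives under /etc/pve/qemu-server/:

```shell
# Write a sample config with the kind of duplicate described above.
cat > /tmp/100.conf <<'EOF'
hostpci0: 09:00,pcie=1
hostpci1: 09:00.1,pcie=1
EOF
# Strip the hostpciN key and any function/option suffix, then report duplicates.
grep '^hostpci' /tmp/100.conf | sed 's/^hostpci[0-9]*: //; s/[,.].*//' | sort | uniq -d
# prints: 09:00
```

Any address printed by the last command is assigned to the VM more than once, which is exactly what produces the "device is already attached" error.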
 
I figured out why I was receiving the "kvm: -device vfio-pci,host=0000:09:00.0,id=hostpci1.0,bus=ich9-pcie-port-2,addr=0x0.0,multifunction=on: vfio 0000:09:00.0: device is already attached" error. It's funny how cryptic messages are usually pretty literal.
great :)

I will update this when I figure out how to fix the 43 error.
- Here is my VM configuration:
balloon: 0
bios: ovmf
bootdisk: scsi0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: Storage:100/vm-100-disk-1.qcow2,size=128K
hostpci0: 09:00,pcie=1
ide2: Storage:iso/SW_DVD9_Win_Pro_10_1909_64BIT_English_Pro_Ent_EDU_N_MLF_X22-17395.ISO,media=cdrom
machine: q35
memory: 8192
name: jsv02-windows
net0: virtio=1E:7E:EE:07:4D:B6,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
parent: NoGPU
sata0: Storage:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
scsi0: Storage:100/vm-100-disk-0.qcow2,backup=0,cache=writeback,iothread=1,replicate=0,size=64G,ssd=1
scsihw: virtio-scsi-pci
i guess you would have to either add 'x-vga=1' to your hostpci0 line or set hv-vendor-id on the cpu
 
great :)



i guess you would have to either add 'x-vga=1' to your hostpci0 line or set hv-vendor-id on the cpu
You are correct! I believe what ended up making things work was enabling "Primary GPU" in the Proxmox GUI. I believe this is equivalent to the 'x-vga=1' flag. Everything is working beautifully now!
 
Hey obss,

I switched back to Intel, but I had it working with my 3700x. In the Proxmox web GUI, my currently working VM with an RTX 2060 has x-vga=1 in the PCI device configuration. I believe this gets added and removed when you check and uncheck the "Primary GPU" box in that same PCI device's menu.

I will say that I have always used a ROM file from my GPU; that is an optional section in the guide. The Proxmox terminal method worked for me, but I have struggled to recreate the .bin file since then, so I have just kept my rom.bin file safe. If you also struggle to create it, I would recommend booting your desktop from another storage device into Windows and dumping the .bin file with GPU-Z.
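The "Proxmox terminal method" referred to above is usually the sysfs ROM dump. A sketch, run as root on the host while no driver is actively using the card (09:00.0 is this thread's GPU address; the output path is just an example):

```shell
cd /sys/bus/pci/devices/0000:09:00.0/
echo 1 > rom                 # enable reading the option ROM
cat rom > /root/gpu-rom.bin  # dump it to a file
echo 0 > rom                 # disable reading again
```

If `cat rom` errors out, that matches the struggle described above; dumping from Windows with GPU-Z is the fallback.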

I recommend installing a CPU-based remote desktop service once you get into the VM. It might save you at some point in the future. I just use TightVNC.

--- Edit-----

Also, very importantly: if you uncheck the Primary GPU box, I believe you can add a "Display" device to the VM and still use the console in the web GUI to install Windows. You can install the Nvidia driver, and when you are ready, shut down the VM, set the Display device to "none", and re-check the Primary GPU box on your Nvidia 2080 Ti.
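The same toggling can be done from the host shell with qm instead of the GUI. A sketch of the sequence, assuming VM ID 100 as in this thread and the 09:00 GPU address:

```shell
qm set 100 -vga std                          # temporary virtual display for installing
# ... install Windows and the Nvidia driver via the web console ...
qm set 100 -vga none                         # drop the virtual display
qm set 100 -hostpci0 09:00,pcie=1,x-vga=1    # make the passed-through GPU primary
```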
 
It should actually work. You should be able to install with the Nvidia GPU; after it is bound to vfio, you will lose the local terminal when the computer starts up, but you can still SSH in. If you want the local terminal back later, you have to disable the vfio binding on boot.

What I recently discovered is that I couldn't get through the Proxmox installation with my 2060. I'm almost positive I got through it before, but now it freezes with green lines, so I switched back to Intel.
 
I can't connect via SSH or RDP. I also plugged a second graphics card into the computer, but it doesn't run both at the same time. I guess the problem is that the AMD processor has no integrated graphics.
 
