VM with PCIe passthrough won't boot (Xeon E-2314 + ASUS P12R-I + Gigabyte RTX 2060)

vvzvlad

New Member
Aug 10, 2023
I'm trying to configure PCIe passthrough to a Windows VM (Win 11 Pro) and I'm having trouble. I got it working once, but then I deleted the configured VM by mistake, and on my second attempt I'm not succeeding, so I'm asking for help.
I have read these guides:
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
https://www.reddit.com/r/Proxmox/comments/lcnn5w/proxmox_pcie_passthrough_in_2_minutes/
https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/
https://forums.serverbuilds.net/t/guide-remote-gaming-on-unraid/4248/7
https://gist.github.com/briann/79aa21950336909de47c1dcad60bb7f7

Here is my system configuration (pveversion -v):
proxmox-ve: 8.0.2 (running kernel: 6.2.16-6-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
proxmox-kernel-6.2: 6.2.16-7
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.4
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.7
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-4
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

I'm running a single Xeon E-2314 on an ASUS P12R-I (Intel C252 chipset) with a Gigabyte RTX 2060 12GB GPU (GV-N2060D6-12GD).

Everything I could find in the BIOS related to IOMMU and VT-d is enabled.

Bash:
root@remedy:~# cat /etc/default/grub |grep GRUB_CMDLINE_LINUX_DEFAULT
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

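The wiki also mentions iommu=pt as an optional extra for passthrough setups. I haven't tried it yet; a sketch of what my line would look like with it added (not my current config):

Bash:
# optional variant from the PVE wiki: run the IOMMU in passthrough mode
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
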
Bash:
root@remedy:~# cat /etc/modules
...
coretemp
nct6775

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

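As far as I understand, on kernel 6.2 and newer the vfio_virqfd functionality is merged into the core vfio module, so that last line may be redundant (it shouldn't hurt). A quick check that the modules actually loaded:

Bash:
# confirm the vfio modules are present in the running kernel
lsmod | grep vfio
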
Bash:
root@remedy:~# lspci -nn
...
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106 [GeForce RTX 2060 12GB] [10de:1f03] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
...

Bash:
root@remedy:~# cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1f03,10de:10f9

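Since lspci -nnk (below) lists nvidiafb and nouveau as candidate modules for the card, some guides also suggest softdep lines in the same file so vfio-pci is guaranteed to claim it first. In my case vfio-pci already shows as the driver in use, but for completeness, a sketch (would need another update-initramfs):

Bash:
# commonly suggested additions to /etc/modprobe.d/vfio.conf:
# load the competing drivers only after vfio-pci has claimed the card
softdep nouveau pre: vfio-pci
softdep nvidiafb pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
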
Bash:
root@remedy:~# cat /etc/modprobe.d/blacklist.conf
blacklist nvidia
blacklist nouveau
blacklist radeon

After that I of course ran update-grub && update-initramfs -u -k all and rebooted.

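To double-check that the flags actually reach the running kernel after a reboot (my root is on LVM, so I assume GRUB is the active bootloader; on a ZFS install one would edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

Bash:
# the running kernel's command line should contain intel_iommu=on
grep -o 'intel_iommu=[^ ]*' /proc/cmdline
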
Bash:
root@remedy:~# dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[    0.009392] ACPI: DMAR 0x000000009E0A1000 000050 (v01 INTEL  EDK2     00000002      01000013)
[    0.009421] ACPI: Reserving DMAR table memory at [mem 0x9e0a1000-0x9e0a104f]
[    0.044406] DMAR: IOMMU enabled <<<<===
[    0.108302] DMAR: Host address width 39
[    0.108303] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.108308] DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.108311] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 0
[    0.108312] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.108313] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.109742] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    6.021019] DMAR: No RMRR found
[    6.021019] DMAR: No ATSR found
[    6.021020] DMAR: No SATC found
[    6.021021] DMAR: dmar0: Using Queued invalidation
[    6.021892] DMAR: Intel(R) Virtualization Technology for Directed I/O <<<<===

Bash:
root@remedy:~# pvesh get /nodes/remedy/hardware/pci --pci-class-blacklist ""
...
│ 0x020000 │ 0x1533 │ 0000:07:00.0 │         18 │ 0x8086 │ I210 Gigabit Network Connection                   │
├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────┼───
│ 0x030000 │ 0x1f03 │ 0000:01:00.0 │         14 │ 0x10de │ TU106 [GeForce RTX 2060 12GB]                     │
├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────┼───
│ 0x030000 │ 0x2000 │ 0000:04:00.0 │         16 │ 0x1a03 │ ASPEED Graphics Family                            │
├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────┼───
│ 0x040300 │ 0x10f9 │ 0000:01:00.1 │         14 │ 0x10de │ TU106 High Definition Audio Controller            │
├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────┼───
│ 0x050000 │ 0x43ef │ 0000:00:14.2 │          4 │ 0x8086 │ Tiger Lake-H Shared SRAM                          │
.....

Bash:
root@remedy:~# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/17/devices/0000:06:00.0
/sys/kernel/iommu_groups/7/devices/0000:00:17.0
/sys/kernel/iommu_groups/15/devices/0000:02:00.0
/sys/kernel/iommu_groups/5/devices/0000:00:15.1
/sys/kernel/iommu_groups/5/devices/0000:00:15.0
/sys/kernel/iommu_groups/5/devices/0000:00:15.3
/sys/kernel/iommu_groups/13/devices/0000:00:1f.0
/sys/kernel/iommu_groups/13/devices/0000:00:1f.5
/sys/kernel/iommu_groups/13/devices/0000:00:1f.4
/sys/kernel/iommu_groups/3/devices/0000:00:12.0
/sys/kernel/iommu_groups/11/devices/0000:00:1d.2
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/18/devices/0000:07:00.0
/sys/kernel/iommu_groups/8/devices/0000:00:19.0
/sys/kernel/iommu_groups/8/devices/0000:00:19.1
/sys/kernel/iommu_groups/16/devices/0000:03:00.0
/sys/kernel/iommu_groups/16/devices/0000:04:00.0
/sys/kernel/iommu_groups/6/devices/0000:00:16.4
/sys/kernel/iommu_groups/6/devices/0000:00:16.0
/sys/kernel/iommu_groups/6/devices/0000:00:16.1
/sys/kernel/iommu_groups/14/devices/0000:01:00.0
/sys/kernel/iommu_groups/14/devices/0000:01:00.1
/sys/kernel/iommu_groups/4/devices/0000:00:14.2
/sys/kernel/iommu_groups/4/devices/0000:00:14.0
/sys/kernel/iommu_groups/12/devices/0000:00:1d.3
/sys/kernel/iommu_groups/2/devices/0000:00:06.0
/sys/kernel/iommu_groups/10/devices/0000:00:1d.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1c.0
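
Group 14 contains exactly the two GPU functions (01:00.0 and 01:00.1) and nothing else, which as far as I understand is the ideal case. A small loop to print the groups with readable device names:

Bash:
# map each IOMMU group to a human-readable device description
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'group %s: ' "$g"
    lspci -nns "${d##*/}"
done | sort -V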
Bash:
root@remedy:~# lsmod |grep nvidia
root@remedy:~# lsmod |grep nouveau
root@remedy:~#
Bash:
root@remedy:~# lspci -nnk
...
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106 [GeForce RTX 2060 12GB] [10de:1f03] (rev a1)
    Subsystem: Gigabyte Technology Co., Ltd TU106 [GeForce RTX 2060 12GB] [1458:4098]
    Kernel driver in use: vfio-pci
    Kernel modules: nvidiafb, nouveau
01:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
    Subsystem: Gigabyte Technology Co., Ltd TU106 High Definition Audio Controller [1458:4098]
    Kernel driver in use: vfio-pci
    Kernel modules: snd_hda_intel
...

Bash:
root@remedy:~# cat /proc/iomem
...
a0000000-dfffffff : PCI Bus 0000:00
  a0000000-a40fffff : PCI Bus 0000:03
    a0000000-a40fffff : PCI Bus 0000:04
      a0000000-a3ffffff : 0000:04:00.0
      a4000000-a403ffff : 0000:04:00.0
  a4100000-a42fffff : PCI Bus 0000:05
  a4300000-a4300fff : 0000:00:1f.5
    a4300000-a4300fff : 0000:00:1f.5 0000:00:1f.5
  a5000000-a60fffff : PCI Bus 0000:01
    a5000000-a5ffffff : 0000:01:00.0 <<<<===
    a6000000-a607ffff : 0000:01:00.0 <<<<===
    a6080000-a6083fff : 0000:01:00.1 <<<<===
  a6100000-a61fffff : PCI Bus 0000:07
    a6100000-a617ffff : 0000:07:00.0
      a6100000-a617ffff : igb
...
My virtual machine configuration:
Bash:
root@remedy:~# cat /etc/pve/qemu-server/104.conf
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=scsi0
cores: 4
cpu: IvyBridge,flags=+pcid
efidisk0: local-lvm:vm-104-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:01:00,device-id=0x1f03,pcie=1,romfile=Gigabyte.RTX2060.12G.rom,vendor-id=0x10de
machine: pc-q35-8.0
memory: 8000
meta: creation-qemu=8.0.2,ctime=1691756461
name: windows11
net0: virtio=16:38:BC:B1:4E:A6,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-104-disk-1,cache=writeback,discard=on,iothread=1,size=200G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=7157a82f-d136-4263-9029-e10c08b094e8
sockets: 1
vga: virtio
vmgenid: cccd2302-431f-4bf0-a5e4-f4a720201a57

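For comparison, the wiki's primary-GPU example differs from my config in a few places: x-vga=1 instead of spoofed vendor/device IDs, vga: none instead of virtio, and ostype: win11 rather than l26 for a Windows guest. A sketch of those lines as I understand them (not what I'm currently running):

Bash:
# wiki-style passthrough lines for a primary GPU (a sketch, untested here):
# x-vga=1 marks the GPU as the primary display, which goes together with
# disabling the emulated adapter; win11 is the proper ostype for this guest
hostpci0: 0000:01:00,pcie=1,x-vga=1
vga: none
ostype: win11
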
I also downloaded the ROM from this site and tried adding it to the configuration (that's the romfile referenced in the config above), but it didn't help. I could not dump the ROM myself with cat from /sys/devices/pci...; see below. I've attached the ROM file to the post.

Bash:
root@remedy:~# echo 1 > "/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom"
root@remedy:~# cat "/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom"
cat: '/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom': Input/output error

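From what I've read, this sysfs ROM read often fails with an I/O error while the card is bound to vfio-pci (or was never initialized by the host). A commonly suggested workaround is to unbind the card first; a sketch, assuming my addresses:

Bash:
# temporarily release the GPU from vfio-pci, dump its ROM, then rebind
echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom    # enable ROM reads
cat /sys/bus/pci/devices/0000:01:00.0/rom > /tmp/gpu.rom
echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom    # disable again
echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
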
When I try to start the VM, it spends some time attempting to boot (consuming memory and CPU) and then shuts down. The task log shows this error:
"TASK ERROR: start failed: command '/usr/bin/kvm -id 104 -name 'windows11,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/104.pid -daemonize -smbios 'type=1,uuid=7157a82f-d136-4263-9029-e10c08b094e8' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/dev/pve/vm-104-disk-0,size=540672' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/104.vnc,password=on' -cpu 'IvyBridge,enforce,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt,+pcid,vendor=GenuineIntel' -m 8000 -object 'iothread,id=iothread-virtioscsi0' -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=cccd2302-431f-4bf0-a5e4-f4a720201a57' -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=0000:01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=/usr/share/kvm/Gigabyte.RTX2060.12G.rom,x-pci-vendor-id=0x10de,x-pci-device-id=0x1f03' -device 'vfio-pci,host=0000:01:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' -device 'virtio-vga,id=vga,bus=pcie.0,addr=0x1' -chardev 'socket,path=/var/run/qemu-server/104.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -spice 'tls-port=61000,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:9a126917b2a7' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' -drive 'file=/dev/pve/vm-104-disk-1,if=none,id=drive-scsi0,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=16:38:BC:B1:4E:A6,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024' -machine 'type=pc-q35-8.0+pve0' -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'' failed: got timeout"
Please help me; I've spent four days trying to get this VM to start with the GPU.

Attachments

  • Gigabyte.RTX2060.12G.rom.zip
    680.9 KB
