GPU passthrough questions

retrojp

New Member
Apr 29, 2021
I am thinking I may have to scale up from what I have, but I've a few questions first.

I currently have..

Xeon E3-1271 v3
Gigabyte GA-B85M-D3H
16 GB DDR3-1600 RAM
Nvidia GTX 760

This is all in an older HTPC case, hence the scaling-up part. I've no room for a second GPU inside this case, and with the PCI slots I have left on this motherboard, it wouldn't need to be a high-end card.

I have an older Nvidia GT 610 which I've thought about trying alongside the GTX 760, but I am not sure if it will work, since its vBIOS isn't UEFI? I don't want to pass this GPU through to the guest VMs; however, I have read it may be easier if you have two GPUs. I would try it out, but I'll need a new case first.
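
(In case it matters, the rough way I was planning to check whether the GT 610's vBIOS actually contains a UEFI/GOP image is below. This is just a sketch using the sysfs rom interface plus the rom-parser tool from https://github.com/awilliam/rom-parser; the PCI address and file name are placeholders, not my actual setup.)

Code:
# sketch only: dump the card's vBIOS via sysfs, then look for an EFI image in it
# (adjust 0000:02:00.0 to wherever the GT 610 shows up in lspci)
cd /sys/bus/pci/devices/0000:02:00.0/
echo 1 > rom                 # allow the ROM to be read
cat rom > /tmp/gt610.rom
echo 0 > rom                 # lock it again
# rom-parser lists the images inside the ROM; a "type 3 (EFI)" entry
# would mean the vBIOS includes UEFI/GOP support
./rom-parser /tmp/gt610.rom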

With my current setup, I am continually getting error Code 43 in the Windows VM. So before I proceed with sourcing a low-end UEFI-capable card and a new case, I thought I'd double-check with you guys on this syslog.

I think the "...resource sanity check..." and "...vfio_ecap_init: hiding ecap..." messages during start-up might be an issue? I've tried searching the web for these two, but I'm not finding a fix. I'm also not 100% sure whether "Kernel modules: snd_hda_intel" is right for the GPU's audio function. I've tried various GPU ROMs from techpowerup as well.
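
(On the Code 43 side, the only other thing on my list is to dump this card's own vBIOS instead of using a techpowerup one, and point the VM at it with romfile=. Only a rough sketch below; the file name is made up and I don't know yet whether it's needed at all.)

Code:
# sketch only: dump the GTX 760's own vBIOS from the host
# (probably best done while the card isn't bound to vfio-pci or otherwise in use)
cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /usr/share/kvm/gtx760.rom   # Proxmox looks for romfile= names in /usr/share/kvm
echo 0 > rom

# then reference it on the hostpci line in /etc/pve/qemu-server/108.conf:
# hostpci0: 01:00,pcie=1,x-vga=1,romfile=gtx760.rom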

A snippet from syslog during Windows 10 guest start-up:

Code:
May 20 13:42:50 pve pvedaemon[1042]: <root@pam> starting task UPID:pve:000005B9:0000377C:60A6594A:qmstart:108:root@pam:
May 20 13:42:50 pve pvedaemon[1465]: start VM 108: UPID:pve:000005B9:0000377C:60A6594A:qmstart:108:root@pam:
May 20 13:42:50 pve systemd[1]: Created slice qemu.slice.
May 20 13:42:50 pve systemd[1]: Started 108.scope.
May 20 13:42:50 pve systemd-udevd[1477]: Using default interface naming scheme 'v240'.
May 20 13:42:50 pve systemd-udevd[1477]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
May 20 13:42:50 pve systemd-udevd[1477]: Could not generate persistent MAC address for tap108i0: No such file or directory
May 20 13:42:51 pve kernel: [  142.264381] device tap108i0 entered promiscuous mode
May 20 13:42:51 pve systemd-udevd[1477]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
May 20 13:42:51 pve systemd-udevd[1477]: Could not generate persistent MAC address for fwbr108i0: No such file or directory
May 20 13:42:51 pve systemd-udevd[1473]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
May 20 13:42:51 pve systemd-udevd[1473]: Using default interface naming scheme 'v240'.
May 20 13:42:51 pve systemd-udevd[1473]: Could not generate persistent MAC address for fwpr108p0: No such file or directory
May 20 13:42:51 pve systemd-udevd[1477]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
May 20 13:42:51 pve systemd-udevd[1477]: Could not generate persistent MAC address for fwln108i0: No such file or directory
May 20 13:42:51 pve kernel: [  142.286365] fwbr108i0: port 1(fwln108i0) entered blocking state
May 20 13:42:51 pve kernel: [  142.286367] fwbr108i0: port 1(fwln108i0) entered disabled state
May 20 13:42:51 pve kernel: [  142.286435] device fwln108i0 entered promiscuous mode
May 20 13:42:51 pve kernel: [  142.286466] fwbr108i0: port 1(fwln108i0) entered blocking state
May 20 13:42:51 pve kernel: [  142.286468] fwbr108i0: port 1(fwln108i0) entered forwarding state
May 20 13:42:51 pve kernel: [  142.288912] vmbr0: port 2(fwpr108p0) entered blocking state
May 20 13:42:51 pve kernel: [  142.288914] vmbr0: port 2(fwpr108p0) entered disabled state
May 20 13:42:51 pve kernel: [  142.288953] device fwpr108p0 entered promiscuous mode
May 20 13:42:51 pve kernel: [  142.288983] vmbr0: port 2(fwpr108p0) entered blocking state
May 20 13:42:51 pve kernel: [  142.288984] vmbr0: port 2(fwpr108p0) entered forwarding state
May 20 13:42:51 pve kernel: [  142.291163] fwbr108i0: port 2(tap108i0) entered blocking state
May 20 13:42:51 pve kernel: [  142.291164] fwbr108i0: port 2(tap108i0) entered disabled state
May 20 13:42:51 pve kernel: [  142.291230] fwbr108i0: port 2(tap108i0) entered blocking state
May 20 13:42:51 pve kernel: [  142.291231] fwbr108i0: port 2(tap108i0) entered forwarding state
May 20 13:42:52 pve kernel: [  143.644417] vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
May 20 13:42:52 pve kernel: [  143.645545] resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
May 20 13:42:52 pve kernel: [  143.645549] caller pci_map_rom+0x71/0x1a0 mapping multiple BARs
May 20 13:42:52 pve kernel: [  143.645550] vfio-pci 0000:01:00.0: No more image in the PCI ROM
May 20 13:42:53 pve pvedaemon[1042]: <root@pam> end task UPID:pve:000005B9:0000377C:60A6594A:qmstart:108:root@pam: OK
May 20 13:42:54 pve kernel: [  145.098209] resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
May 20 13:42:54 pve kernel: [  145.098213] caller pci_map_rom+0x71/0x1a0 mapping multiple BARs
May 20 13:42:54 pve kernel: [  145.098215] vfio-pci 0000:01:00.0: No more image in the PCI ROM
May 20 13:42:54 pve kernel: [  145.098231] resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
May 20 13:42:54 pve kernel: [  145.098233] caller pci_map_rom+0x71/0x1a0 mapping multiple BARs
May 20 13:42:54 pve kernel: [  145.098234] vfio-pci 0000:01:00.0: No more image in the PCI ROM
May 20 13:43:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 20 13:43:00 pve systemd[1]: pvesr.service: Succeeded.
May 20 13:43:00 pve systemd[1]: Started Proxmox VE replication runner.

Additional info:

Code:
root@pve:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
IOMMU group 0 00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v3 Processor DRAM Controller [8086:0c08] (rev 06)
IOMMU group 10 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 06)
IOMMU group 1 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
IOMMU group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 760] [10de:1187] (rev a1)
IOMMU group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
IOMMU group 2 00:14.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI [8086:8c31] (rev 04)
IOMMU group 3 00:16.0 Communication controller [0780]: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 [8086:8c3a] (rev 04)
IOMMU group 3 00:16.3 Serial controller [0700]: Intel Corporation 8 Series/C220 Series Chipset Family KT Controller [8086:8c3d] (rev 04)
IOMMU group 4 00:1a.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 [8086:8c2d] (rev 04)
IOMMU group 5 00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 04)
IOMMU group 6 00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 [8086:8c10] (rev d4)
IOMMU group 7 00:1c.2 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 [8086:8c14] (rev d4)
IOMMU group 8 00:1d.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 [8086:8c26] (rev 04)
IOMMU group 9 00:1f.0 ISA bridge [0601]: Intel Corporation B85 Express LPC Controller [8086:8c50] (rev 04)
IOMMU group 9 00:1f.2 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 04)
IOMMU group 9 00:1f.3 SMBus [0c05]: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller [8086:8c22] (rev 04)

================================================================

root@pve:~# lspci  -v -s  $(lspci | grep ' VGA ' | cut -d" " -f 1)
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: eVga.com. Corp. GK104 [GeForce GTX 760]
        Flags: fast devsel, IRQ 16
        Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
        Memory at e8000000 (64-bit, prefetchable) [size=128M]
        Memory at f0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at e000 [size=128]
        Expansion ROM at 000c0000 [disabled] [size=128K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Capabilities: [b4] Vendor Specific Information: Len=14 <?>
        Capabilities: [100] Virtual Channel
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] #19
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau

================================================================

01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] (rev a1)
        Subsystem: eVga.com. Corp. GK104 [GeForce GTX 760]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
        Subsystem: eVga.com. Corp. GK104 HDMI Audio Controller
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel

================================================================

agent: 1
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'  (I've tried without this; it doesn't seem to make a difference)
balloon: 0
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: local-lvm:vm-108-disk-0,size=4M
hostpci0: 01:00,pcie=1,x-vga=1
ide2: none,media=cdrom
machine: pc-q35-5.2
memory: 8192
name: Win-PC
net0: virtio=4A:E5:D1:0E:16:F5,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsi0: local-lvm:vm-108-disk-1,size=160G
scsihw: virtio-scsi-pci
smbios1: uuid=d6936899-e0b6-466d-b41a-2220632116b6
sockets: 1
vga: none
vmgenid: 4c06c617-4c43-432e-93a4-2965771e3aef

================================================================

root@pve:~# dmesg | grep -e DMAR -e IOMMU
[    0.007680] ACPI: DMAR 0x00000000DEB76930 000080 (v01 INTEL  HSW      00000001 INTL 00000001)
[    0.042164] DMAR: IOMMU enabled
[    0.092531] DMAR: Host address width 39
[    0.092531] DMAR: DRHD base: 0x000000fed90000 flags: 0x1
[    0.092534] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap d2008c20660462 ecap f010da
[    0.092535] DMAR: RMRR base: 0x000000df6cf000 end: 0x000000df6ddfff
[    0.092536] DMAR-IR: IOAPIC id 8 under DRHD base  0xfed90000 IOMMU 0
[    0.092537] DMAR-IR: HPET id 0 under DRHD base 0xfed90000
[    0.092537] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.092735] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.737483] DMAR: No ATSR found
[    0.737511] DMAR: dmar0: Using Queued invalidation
[    0.739142] DMAR: Intel(R) Virtualization Technology for Directed I/O

================================================================

root@pve:~# cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1187,10de:0e0a disable_vga=1

================================================================

root@pve:~# cat /etc/modprobe.d/blacklist.conf
blacklist radeon
blacklist nouveau
blacklist nvidia

================================================================

root@pve:~# cat /etc/modprobe.d/pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE

# nidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb

================================================================

root@pve:~# cat /etc/modprobe.d/kvm.conf
#options kvm ignore_msrs=1 (tried this uncommented)

================================================================

root@pve:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

================================================================

root@pve:~# cat /etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off"
GRUB_CMDLINE_LINUX=""

================================================================

root@pve:~# cat /etc/pve/qemu-server/vmid.conf
hostpci0: 01:00

================================================================

Edited: sorry, I never put it in the code box.
 
