GPU Passthrough on AMD Machine

hieve

New Member
May 29, 2017
Hey there,

I'm currently trying to set up a cheap & simple server for some testing purposes,
and I've run into some problems setting up GPU passthrough.

The hardware I'm using:

Processor: AMD FX-4300 (virtualization capable; the wiki says the FX series supports AMD-Vi)
Motherboard: ASRock N68-GS4 FX (the BIOS has a "Secure Virtual Machine" option, but it's unclear whether AMD-Vi is really supported, as there is no info about it on the web or in the manuals; upgraded to the latest BIOS version)
GPU: Nvidia GT 1030 (the card was added a while after the Proxmox install, if that matters in any way)

I have tried to follow the wiki and got the error
"TASK ERROR: Cannot open iommu_group: No such file or directory"
and there really is no IOMMU grouping on this machine.

This is what I get when I look for the IOMMU (and, as I said, the only related option in the BIOS is Secure Virtual Machine):
dmesg | grep -e DMAR -e IOMMU
[ 0.000000] AGP: Please enable the IOMMU option in the BIOS setup
[ 0.692942] PCI-DMA: using GART IOMMU.
[ 0.692944] PCI-DMA: Reserving 64MB of IOMMU area in the AGP aperture


I hope someone can help me, or maybe confirm that "Secure Virtual Machine" is not the option I need, as there is no other one in the BIOS.
It's a cheap consumer board, so they might have stripped that out and I may need to change the motherboard...

It really confuses me that motherboards give no indication of whether they support it or not; or maybe I just haven't found it yet (especially with consumer hardware).

This machine cost around 250-400€ (depending on the HDDs/SSDs), so I'd be willing to swap the motherboard for another one. If anyone knows a good, cheap AM3+ board in the µATX form factor, I'd be happy about any advice!

Edit (some additional infos) :

pveversion -v
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-1 (running version: 4.3-1/e7cdc165)
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-88
pve-firmware: 1.1-9
libpve-common-perl: 4.0-73
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-61
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-6
pve-container: 1.0-75
pve-firewall: 2.0-29
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
 
Hi,
does your system support AMD-Vi?
If not, PCIe passthrough will not work.
 

Yeah, it was not.
The IOMMU option has to be explicitly mentioned in the BIOS; "Secure Virtual Machine" alone won't do it.

Some vendors publish a step-by-step BIOS guide in their manuals; if you can find those online, you can search them for IOMMU.
I have now swapped the mainboard for an ASRock 970M Pro3 (micro-ATX), which was the only micro-ATX board available from my preferred vendors that supports IOMMU.

The passthrough now works partially:
I can see the graphics card in my Win10 virtual machine, but the driver has stopped working with error code 43.
I'm already searching the internet for guides on how to fix that.
It looks like Nvidia doesn't like virtual machines and stops the driver from working.



I need to set hv_vendor_id=Nvidia43FIX in the QEMU startup line, but how can I do this? The current line contains hv_vendor_id=proxmox and I can't find where that is stored.
 
You can override it with args.
Get the --cpu setting of your VM:

qm showcmd <VMID> | grep --color -e "--cpu '\S*'"

Now copy the colored string and paste it into this command:
qm set <VMID> --args "<colored string>,hv_vendor_id=Nvidia43FIX"
 


qm showcmd 105 | grep --color -e "--cpu '\S*'"

gives me nothing with a fresh VM.
But I can run qm show/showcmd 105 (btw, do the commands show and showcmd give the same output?),
and when I filter manually for -cpu I get this (fresh Win10 VM, cpu=host):
-cpu host,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi
If I now append my wanted option via
qm set 105 --args "-cpu host,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=Nvidia43FIX"

I now have the following in my parameters:
-cpu host,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi -m 8192 -k de -cpu host,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=Nvidia43FIX

So the -cpu option just gets appended and is now doubled? (As far as I can tell, QEMU uses the last -cpu it sees, so the appended one with the fix should win.)
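
In case someone wants to check or undo this: as far as I can tell, the args end up as a plain line in the VM config, so (with my VM 105) they can be inspected and removed again like this:
Code:
# show the args line that qm set added
grep ^args /etc/pve/qemu-server/105.conf
# remove it again if the doubled -cpu causes trouble
qm set 105 --delete args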



I have experimented a bit and looked through all the details again; it looks like my card is bound via both pci-stub and vfio:

root@proxmox:~# dmesg | grep -i pci-stub
[ 2.115927] pci-stub: add 10DE:1D01 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 2.115943] pci-stub 0000:01:00.0: claimed by stub
[ 2.115950] pci-stub: add 10DE:0FB8 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 2.115958] pci-stub 0000:01:00.1: claimed by stub
root@proxmox:~# dmesg | grep -i vfio
[ 2.922139] VFIO - User Level meta-driver version: 0.3
[ 2.925146] vfio_pci: add [10de:1d01[ffff:ffff]] class 0x000000/00000000
[ 2.925150] vfio_pci: add [10de:0fb8[ffff:ffff]] class 0x000000/00000000
[ 2062.756376] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
[ 2064.506169] vfio-pci 0000:01:00.0: Invalid ROM contents
[ 2064.506268] vfio-pci 0000:01:00.0: Invalid ROM contents
[ 2305.103417] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
[ 2306.857455] vfio-pci 0000:01:00.0: Invalid ROM contents
[ 2306.857622] vfio-pci 0000:01:00.0: Invalid ROM contents
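
For completeness, the binding itself comes from what I set up following the wiki; roughly like this (the file name and the use of pci-stub are from my setup, and the IDs are my GT 1030 and its audio function):
Code:
# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1d01,10de:0fb8
# plus, on the kernel command line, to grab the card early at boot:
#   pci-stub.ids=10de:1d01,10de:0fb8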


edit: I've tried more, but unsuccessfully so far. It's still showing code 43.

Configs (after host reboot):
dmesg | grep AMD-Vi
Code:
[    1.097469] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
[    1.097470] AMD-Vi: Interrupt remapping enabled
[    1.097581] AMD-Vi: Lazy IO/TLB flushing enabled
/etc/default/grub
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
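(After changing that line I ran update-grub and rebooted, otherwise the new kernel command line is not picked up:)
Code:
update-grub
reboot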
/etc/pve/qemu-server/302.conf
Code:
args: -cpu host,kvm=off,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=Nvidia43FIX
balloon: 0
bios: ovmf
bootdisk: sata0
cores: 4
cpu: host
hostpci0: 01:00,pcie=1,x-vga=on
ide2: hdd1:iso/Windows10-64bit.iso,media=cdrom
machine: q35
memory: 8192
name: win10
net0: e1000=9E:3A:CA:58:4F:50,bridge=vmbr0
numa: 0
ostype: win8
sata0: hdd1:302/vm-302-disk-1.qcow2,cache=writeback,size=128G
smbios1: uuid=4bf4fac1-b1e3-447b-ac30-970370b24b99
sockets: 1
dmesg | grep ecap shows nothing after a reboot, but once the guest is booted there is:
Code:
[  587.984118] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19@0x900
dmesg | grep vfio
Code:
[    2.954899] vfio_pci: add [10de:1d01[ffff:ffff]] class 0x000000/00000000
[    2.954905] vfio_pci: add [10de:0fb8[ffff:ffff]] class 0x000000/00000000
dmesg | grep pci-stub
Code:
[    2.145150] pci-stub: add 10DE:1D01 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[    2.145166] pci-stub 0000:01:00.0: claimed by stub
[    2.145176] pci-stub: add 10DE:0FB8 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[    2.145187] pci-stub 0000:01:00.1: claimed by stub
qm showcmd 302
Code:
Use of uninitialized value $data in split at /usr/share/perl5/PVE/JSONSchema.pm line 512.
using uefi without permanent efivars disk
/usr/bin/kvm -id 302 -chardev socket,id=qmp,path=/var/run/qemu-server/302.qmp,server,nowait -mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/302.pid -daemonize -smbios type=1,uuid=4bf4fac1-b1e3-447b-ac30-970370b24b99 -drive if=pflash,unit=0,format=raw,readonly,file=/usr/share/kvm/OVMF_CODE-pure-efi.fd -drive if=pflash,unit=1,format=raw,file=/tmp/302-ovmf.fd -name win10 -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vga none -nographic -no-hpet -cpu host,hv_vendor_id=proxmox,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi,kvm=off -m 8192 -k de -cpu host,kvm=off,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=Nvidia43FIX -readconfig /usr/share/qemu-server/pve-q35.cfg -device usb-tablet,id=tablet,bus=ehci.0,port=1 -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on -device vfio-pci,host=01:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1 -iscsi initiator-name=iqn.1993-08.org.debian:01:ce47a7a612b5 -drive file=/hdds/hd1/template/iso/Windows10-64bit.iso,if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -device ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7 -drive file=/hdds/hd1/images/302/vm-302-disk-1.qcow2,if=none,id=drive-sata0,cache=writeback,format=qcow2,aio=threads,detect-zeroes=on -device ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100 -netdev type=tap,id=net0,ifname=tap302i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown -device e1000,mac=9E:3A:CA:58:4F:50,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -rtc driftfix=slew,base=localtime -machine type=q35 -global kvm-pit.lost_tick_policy=discard
qm monitor 302
info pci
Code:
Bus  0, device   0, function 0:
    Host bridge: PCI device 8086:29c0
      id ""
  Bus  0, device  26, function 0:
    USB controller: PCI device 8086:2937
      IRQ 10.
      BAR4: I/O at 0xd100 [0xd11f].
      id "uhci-4"
  Bus  0, device  26, function 1:
    USB controller: PCI device 8086:2938
      IRQ 10.
      BAR4: I/O at 0xd0e0 [0xd0ff].
      id "uhci-5"
  Bus  0, device  26, function 2:
    USB controller: PCI device 8086:2939
      IRQ 11.
      BAR4: I/O at 0xd0c0 [0xd0df].
      id "uhci-6"
  Bus  0, device  26, function 7:
    USB controller: PCI device 8086:293c
      IRQ 11.
      BAR0: 32 bit memory at 0x92006000 [0x92006fff].
      id "ehci-2"
  Bus  0, device  27, function 0:
    Audio controller: PCI device 8086:293e
      IRQ 10.
      BAR0: 32 bit memory at 0x92000000 [0x92003fff].
      id "audio0"
  Bus  0, device  28, function 0:
    PCI bridge: PCI device 8086:3420
      BUS 0.
      secondary bus 1.
      subordinate bus 1.
      IO range [0xc000, 0xcfff]
      memory range [0x90000000, 0x910fffff]
      prefetchable memory range [0x800000000, 0x811ffffff]
      id "ich9-pcie-port-1"
  Bus  1, device   0, function 0:
    VGA controller: PCI device 10de:1d01
      IRQ 10.
      BAR0: 32 bit memory at 0x90000000 [0x90ffffff].
      BAR1: 64 bit prefetchable memory at 0x800000000 [0x80fffffff].
      BAR3: 64 bit prefetchable memory at 0x810000000 [0x811ffffff].
      BAR5: I/O at 0xc000 [0xc07f].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0007fffe].
      id "hostpci0.0"
  Bus  1, device   0, function 1:
    Audio controller: PCI device 10de:0fb8
      IRQ 10.
      BAR0: 32 bit memory at 0x91000000 [0x91003fff].
      id "hostpci0.1"
  Bus  0, device  28, function 1:
    PCI bridge: PCI device 8086:3420
      BUS 0.
      secondary bus 2.
      subordinate bus 2.
      IO range [0xb000, 0xbfff]
      memory range [0x91e00000, 0x91ffffff]
      prefetchable memory range [0xfffffffffff00000, 0x000fffff]
      id "ich9-pcie-port-2"
  Bus  0, device  28, function 2:
    PCI bridge: PCI device 8086:3420
      BUS 0.
      secondary bus 3.
      subordinate bus 3.
      IO range [0xa000, 0xafff]
      memory range [0x91c00000, 0x91dfffff]
      prefetchable memory range [0xfffffffffff00000, 0x000fffff]
      id "ich9-pcie-port-3"
  Bus  0, device  28, function 3:
    PCI bridge: PCI device 8086:3420
      BUS 0.
      secondary bus 4.
      subordinate bus 4.
      IO range [0x9000, 0x9fff]
      memory range [0x91a00000, 0x91bfffff]
      prefetchable memory range [0xfffffffffff00000, 0x000fffff]
      id "ich9-pcie-port-4"
  Bus  0, device  29, function 0:
    USB controller: PCI device 8086:2934
      IRQ 10.
      BAR4: I/O at 0xd0a0 [0xd0bf].
      id "uhci-1"
  Bus  0, device  29, function 1:
    USB controller: PCI device 8086:2935
      IRQ 10.
      BAR4: I/O at 0xd080 [0xd09f].
      id "uhci-2"
  Bus  0, device  29, function 2:
    USB controller: PCI device 8086:2936
      IRQ 11.
      BAR4: I/O at 0xd060 [0xd07f].
      id "uhci-3"
  Bus  0, device  29, function 7:
    USB controller: PCI device 8086:293a
      IRQ 11.
      BAR0: 32 bit memory at 0x92005000 [0x92005fff].
      id "ehci"
  Bus  0, device  30, function 0:
    PCI bridge: PCI device 8086:244e
      BUS 0.
      secondary bus 5.
      subordinate bus 8.
      IO range [0x6000, 0x8fff]
      memory range [0x91200000, 0x918fffff]
      prefetchable memory range [0xfffffffffff00000, 0x000fffff]
      id "pcidmi"
  Bus  5, device   1, function 0:
    PCI bridge: PCI device 1b36:0001
      IRQ 10.
      BUS 5.
      secondary bus 6.
      subordinate bus 6.
      IO range [0x8000, 0x8fff]
      memory range [0x91600000, 0x917fffff]
      prefetchable memory range [0xfffffffffff00000, 0x000fffff]
      BAR0: 64 bit memory at 0x91800000 [0x918000ff].
      id "pci.0"
  Bus  6, device   7, function 0:
    SATA controller: PCI device 8086:2922
      IRQ 10.
      BAR4: I/O at 0x8040 [0x805f].
      BAR5: 32 bit memory at 0x91620000 [0x91620fff].
      id "ahci0"
  Bus  6, device  18, function 0:
    Ethernet controller: PCI device 8086:100e
      IRQ 11.
      BAR0: 32 bit memory at 0x91600000 [0x9161ffff].
      BAR1: I/O at 0x8000 [0x803f].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
      id "net0"
  Bus  5, device   2, function 0:
    PCI bridge: PCI device 1b36:0001
      IRQ 11.
      BUS 5.
      secondary bus 7.
      subordinate bus 7.
      IO range [0x7000, 0x7fff]
      memory range [0x91400000, 0x915fffff]
      prefetchable memory range [0xfffffffffff00000, 0x000fffff]
      BAR0: 64 bit memory at 0x91801000 [0x918010ff].
      id "pci.1"
  Bus  5, device   3, function 0:
    PCI bridge: PCI device 1b36:0001
      IRQ 11.
      BUS 5.
      secondary bus 8.
      subordinate bus 8.
      IO range [0x6000, 0x6fff]
      memory range [0x91200000, 0x913fffff]
      prefetchable memory range [0xfffffffffff00000, 0x000fffff]
      BAR0: 64 bit memory at 0x91802000 [0x918020ff].
      id "pci.2"
  Bus  0, device  31, function 0:
    ISA bridge: PCI device 8086:2918
      id ""
  Bus  0, device  31, function 2:
    SATA controller: PCI device 8086:2922
      IRQ 10.
      BAR4: I/O at 0xd040 [0xd05f].
      BAR5: 32 bit memory at 0x92004000 [0x92004fff].
      id ""
  Bus  0, device  31, function 3:
    SMBus: PCI device 8086:2930
      IRQ 10.
      BAR4: I/O at 0xd000 [0xd03f].
      id ""
 
It's working now... but only with some help.
I tried inserting a 2nd card (an old 8800 GT), which worked right out of the box, even in the 1st slot, and after that the 2nd card (the GT 1030) also worked:
8800 GT in the 1st PCIe slot (01:00), GT 1030 in the 2nd PCIe slot (02:00, with 02:00.0 and 02:00.1).

My current mainboard (ASRock 970M Pro3) has no onboard graphics, so the vBIOS of the boot GPU is probably being shadowed here, and that would explain the "Invalid ROM contents" message from vfio at guest startup. The vBIOS extracted as described in the wiki does NOT work (I just haven't tried a GPU-Z dump so far).

So... my problem now is that the 8800 GT is not a low-profile card and does not fit in the case, plus it has no real use at the moment.

I have tried inserting other PCI devices, but unless I add a 2nd graphics card, the GT 1030 always gets mapped to 01:00 and comes up with the message:

[ 305.644526] vfio-pci 0000:01:00.0: Invalid ROM contents
[ 305.644837] vfio-pci 0000:01:00.0: Invalid ROM contents



Edit:
Upgraded to the latest Proxmox 5.0 beta; probably the same error, just the description changed a bit (when not giving an explicit romfile):

[ 1778.220560] vfio-pci 0000:01:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff
[ 1778.220665] vfio-pci 0000:01:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff

Same problem in the VM: code 43.
Even with a romfile: code 43.
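
That error message at least makes it easy to sanity-check a dumped ROM by hand: a valid PCI option ROM should start with the bytes 55 aa (the path is just where I keep my romfile):
Code:
# print the first bytes of the romfile; a good vBIOS dump starts with 55 aa
od -A x -t x1 /usr/share/kvm/vbios.bin | head -n 1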


My KVM startup line (formatted by replacing spaces with newlines in Notepad++ for readability):

/usr/bin/kvm
-id
304
-chardev
socket,id=qmp,path=/var/run/qemu-server/304.qmp,server,nowait
-mon
chardev=qmp,mode=control
-pidfile
/var/run/qemu-server/304.pid
-daemonize
-smbios
type=1,uuid=06e2f596-eeb6-4c97-aa19-38b92a485f82
-name
win7-test
-smp
4,sockets=1,cores=4,maxcpus=4
-nodefaults
-boot
menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg
-vga
none
-vnc
unix:/var/run/qemu-server/304.vnc,x509,password
-no-hpet
-cpu
host,hv_vendor_id=Nvidia43FIX,kvm=off

-m
8192
-k
de
-readconfig
/usr/share/qemu-server/pve-q35.cfg
-device
usb-tablet,id=tablet,bus=ehci.0,port=1
-device
vfio-pci,host=01:00.0,id=hostpci0.0,bus=pci.0,addr=0x10.0,multifunction=on,x-vga=on,romfile=/usr/share/kvm/vbios.bin
-device
vfio-pci,host=01:00.1,id=hostpci0.1,bus=pci.0,addr=0x10.1

-device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
-iscsi
initiator-name=iqn.1993-08.org.debian:01:d327afbdd154
-drive
file=/hdds/hd1/template/iso/virtio-win-0.1.126.iso,if=none,id=drive-ide0,media=cdrom,aio=threads
-device
ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200
-drive
file=/hdds/hd1/template/iso/win7x64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads
-device
ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=201
-drive
file=/hdds/hd1/images/304/vm-304-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on
-device
virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100
-netdev
type=tap,id=net0,ifname=tap304i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on
-device
virtio-net-pci,mac=6E:5C:00:69:66:99,netdev=net0,bus=pci.0,addr=0x12,id=net0
-rtc
driftfix=slew,base=localtime
-machine
type=q35
-global
kvm-pit.lost_tick_policy=discard
 
OMFG...
After 1.5 weeks of hard trial and error, and reading nearly EVERYTHING on this topic, I've got it...

The problem:

a motherboard without an onboard graphics card, with the card to pass through as the 1st PCIe device


What happens here...

> As soon as your system starts, the vBIOS of your initial graphics card gets shadowed, as described in the wiki, and you might not be able to use the card for passthrough.

What can we do?

> Get a vBIOS ROM and pass it to KVM. BUT you can only dump it when the card is a) not in the 1st PCIe slot and b) not in use. Otherwise you will still get a ROM, but it's a faulty one that won't work, while also producing no errors...


So how did I solve it?

> I put my 2nd graphics card back in as the 1st PCIe device, ripped the ROM off the 2nd device (my target GPU) with the commands below, shut down, removed the 2nd graphics card, put my target card back into the 1st PCIe slot, added the romfile, and started the VMs. It works without problems, even after a restart.
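
The dump itself was the usual sysfs method from the wiki, run while the GT 1030 was sitting in the 2nd slot at 02:00.0 (adjust the PCI address to your own setup):
Code:
# allow reading the ROM, dump it, then lock it again
cd /sys/bus/pci/devices/0000:02:00.0/
echo 1 > rom
cat rom > /usr/share/kvm/gt1030.rom
echo 0 > rom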

But... there are so many things to stumble over here that you can really go crazy after a while.

The only remaining problem: the VM config file alone won't do it for some reason; the stable Proxmox version won't accept my romfile, and the beta loads my vfio devices with the wrong assignment.

So I set up everything in the VM config (/etc/pve/qemu-server/<vmid>.conf) as far as it worked, then ran ps -aux | grep "kvm -id <vmid>", copied the output into a text editor (Notepad++), changed everything for my case, and launched it manually.


This is the final, working command line for my passthrough VM. The options I changed (for the code 43 fix / loading the GPU) are hv_vendor_id=Nvidia43FIX,kvm=off on -cpu and romfile= on the vfio-pci device:
/usr/bin/kvm -id 304 -chardev socket,id=qmp,path=/var/run/qemu-server/304.qmp,server,nowait -mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/304.pid -daemonize -smbios type=1,uuid=06e2f596-eeb6-4c97-aa19-38b92a485f82 -name win7-test -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vga none -vnc unix:/var/run/qemu-server/304.vnc,x509,password -no-hpet -cpu host,hv_vendor_id=Nvidia43FIX,kvm=off -m 8192 -k de -readconfig /usr/share/qemu-server/pve-q35.cfg -device usb-tablet,id=tablet,bus=ehci.0,port=1 -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=pci.0,addr=0x10.0,multifunction=on,x-vga=on,romfile=/usr/share/kvm/gt1030.rom -device vfio-pci,host=01:00.1,id=hostpci0.1,bus=pci.0,addr=0x10.1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:d327afbdd154 -drive file=/hdds/hd1/template/iso/virtio-win-0.1.126.iso,if=none,id=drive-ide0,media=cdrom,aio=threads -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200 -drive file=/hdds/hd1/template/iso/win7x64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=201 -drive file=/hdds/hd1/images/304/vm-304-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap304i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=6E:5C:00:69:66:99,netdev=net0,bus=pci.0,addr=0x12,id=net0 -rtc driftfix=slew,base=localtime -machine type=q35 -global kvm-pit.lost_tick_policy=discard
 
The code 43 error is still there with SeaBIOS (Win7) and OVMF (Win10).
I was able to install the GeForce drivers and I'm seeing output on the monitor, so the device is working; now only Nvidia's virtual machine detection blocks my graphics card from full usage (e.g. GeForce Experience keeps showing that a new driver is available even though it's installed).

If I look in the Win10 Task Manager, it still says "Virtual machine: Yes".
It looks like kvm=off does not work in my case and does not hide the hypervisor correctly.

I have also used this option, according to the wiki:
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf


So, are there any other parameters I can use to hide the hypervisor state?
(It is probably only needed for a one-time installation of the drivers, so performance issues are OK with the hide-KVM parameters.)
 
