GPU passthrough errors, no signal

maeffel

New Member
Apr 7, 2022
Hello dear people!
I managed to install macOS Monterey on my fresh Proxmox 7.1 installation.
Everything went fine: installing macOS through the noVNC console and setting everything up worked. Now only the last part of my project does not work: the GPU passthrough.
[Attachment: Screenshot 2022-04-07 210700.png]
The only changes I made to the configuration in the GUI were adding the PCI device:
[Attachment: Screenshot 2022-04-07 210821.png]
... and setting the display to none. Before that, I enabled IOMMU. According to every tutorial I have seen, that should be enough.
When I start the VM, this error message appears:
Code:
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -name macos-monterey -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=8b91a456-73e1-49b1-95d6-dcb633559c8b' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/pve/vm-100-disk-1' -smp '6,sockets=3,cores=2,maxcpus=6' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -cpu 'Penryn,enforce,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt,vendor=GenuineIntel' -m 16384 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=4f0987e3-0d2e-482d-8bee-55a05f9b4ffa' -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=0000:01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on' -device 'vfio-pci,host=0000:01:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' -device 'usb-host,hostbus=1,hostport=2,id=usb0' -device 'usb-host,hostbus=1,hostport=4,id=usb1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:4fdef895416a' -drive 'file=/dev/pve/vm-100-disk-0,if=none,id=drive-virtio0,cache=unsafe,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=6E:57:0A:EE:68:A9,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=101' -machine 'type=q35+pve0' -device 'isa-applesmc,osk=ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc' -smbios 'type=2' -device 'usb-kbd,bus=ehci.0,port=2' -global 'nec-usb-xhci.msi=off' -global 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' -cpu 'host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc'' failed: got timeout

and the whole PVE installation crashes. The PC keeps running for a while and then restarts; after that, everything works again until I try to start the VM once more.

Now I am getting no signal on the monitor and PCIe ROM errors.

My PC details are the following:
- Acer Predator Orion PO3-600
- Nvidia GTX 1070
- Intel Core i7-8700
- 16 GB RAM
- 120 GB WD Green SSD (installed for this project)

I am not sure what data you need to help me; I assume the following could be useful:
Code:
root@proxmox:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nnks "${d##*/}"; done
IOMMU group 0 00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers [8086:3ec2] (rev 07)
        DeviceName:  Onboard Realtek Ethernet
        Subsystem: Acer Incorporated [ALI] 8th Gen Core Processor Host Bridge/DRAM Registers [1025:1289]
        Kernel driver in use: skl_uncore
        Kernel modules: ie31200_edac
IOMMU group 10 02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 16)
        Subsystem: Acer Incorporated [ALI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [1025:1289]
        Kernel driver in use: r8169
        Kernel modules: r8169
IOMMU group 1 00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
        Kernel driver in use: pcieport
IOMMU group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
        Subsystem: PC Partner Limited / Sapphire Technology GP104 [GeForce GTX 1070] [174b:1071]
        Kernel driver in use: nouveau
        Kernel modules: nvidiafb, nouveau
IOMMU group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
        Subsystem: PC Partner Limited / Sapphire Technology GP104 High Definition Audio Controller [174b:1071]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
IOMMU group 2 00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
        Subsystem: Acer Incorporated [ALI] Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [1025:1289]
IOMMU group 3 00:12.0 Signal processing controller [1180]: Intel Corporation Cannon Lake PCH Thermal Controller [8086:a379] (rev 10)
        Subsystem: Acer Incorporated [ALI] Cannon Lake PCH Thermal Controller [1025:1289]
        Kernel driver in use: intel_pch_thermal
        Kernel modules: intel_pch_thermal
IOMMU group 4 00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
        Subsystem: Acer Incorporated [ALI] Cannon Lake PCH USB 3.1 xHCI Host Controller [1025:1289]
        Kernel driver in use: xhci_hcd
        Kernel modules: xhci_pci
IOMMU group 4 00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
        Subsystem: Acer Incorporated [ALI] Cannon Lake PCH Shared SRAM [1025:1289]
IOMMU group 5 00:14.3 Network controller [0280]: Intel Corporation Wireless-AC 9560 [Jefferson Peak] [8086:a370] (rev 10)
        Subsystem: Intel Corporation Wireless-AC 9560 [Jefferson Peak] [8086:02a4]
        Kernel driver in use: iwlwifi
        Kernel modules: iwlwifi
IOMMU group 6 00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
        Subsystem: Acer Incorporated [ALI] Cannon Lake PCH HECI Controller [1025:1289]
        Kernel driver in use: mei_me
        Kernel modules: mei_me
IOMMU group 7 00:17.0 RAID bus controller [0104]: Intel Corporation SATA Controller [RAID mode] [8086:2822] (rev 10)
        DeviceName:  Onboard Intel SATA Controller
        Subsystem: Acer Incorporated [ALI] SATA Controller [RAID mode] [1025:1289]
        Kernel driver in use: ahci
        Kernel modules: ahci
IOMMU group 8 00:1c.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #6 [8086:a33d] (rev f0)
        Kernel driver in use: pcieport
IOMMU group 9 00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:a308] (rev 10)
        Subsystem: Acer Incorporated [ALI] Device [1025:1289]
IOMMU group 9 00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
        Subsystem: Acer Incorporated [ALI] Cannon Lake PCH cAVS [1025:1289]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel, snd_sof_pci_intel_cnl
IOMMU group 9 00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
        Subsystem: Acer Incorporated [ALI] Cannon Lake PCH SMBus Controller [1025:1289]
        Kernel driver in use: i801_smbus
        Kernel modules: i2c_i801
IOMMU group 9 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
        Subsystem: Acer Incorporated [ALI] Cannon Lake PCH SPI Controller [1025:1289]
The active kernel command line:
Code:
BOOT_IMAGE=/boot/vmlinuz-5.13.19-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on
My GRUB configuration (/etc/default/grub):
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX=""
The VFIO modules I load at boot:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
And the IOMMU group device paths:
Code:
/sys/kernel/iommu_groups/7/devices/0000:00:17.0
/sys/kernel/iommu_groups/5/devices/0000:00:14.3
/sys/kernel/iommu_groups/3/devices/0000:00:12.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/8/devices/0000:00:1c.0
/sys/kernel/iommu_groups/6/devices/0000:00:16.0
/sys/kernel/iommu_groups/4/devices/0000:00:14.2
/sys/kernel/iommu_groups/4/devices/0000:00:14.0
/sys/kernel/iommu_groups/2/devices/0000:00:08.0
/sys/kernel/iommu_groups/10/devices/0000:02:00.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1f.0
/sys/kernel/iommu_groups/9/devices/0000:00:1f.5
/sys/kernel/iommu_groups/9/devices/0000:00:1f.3
/sys/kernel/iommu_groups/9/devices/0000:00:1f.4
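For completeness: as far as I understand, these are the standard commands to apply and verify the settings above, and this is roughly what I ran (please tell me if a step is missing):
Code:
# rebuild the GRUB config and the initramfs after editing /etc/default/grub and /etc/modules
update-grub
update-initramfs -u -k all
reboot

# after the reboot, check that the IOMMU is active
dmesg | grep -e DMAR -e IOMMU

# and check which kernel driver is bound to the GPU (the output above still shows nouveau)
lspci -nnk -s 01:00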

A thought of mine: I know macOS struggles with Nvidia cards, but at least the Proxmox boot logo and the OpenCore bootloader should appear, right?
As I am new to Proxmox, please forgive me any mistakes. Hit me up if you need any more data. I don't want to give up on this exciting piece of software this early :)
Thank you very much in advance!!
 
How much memory does your system have? Acer Predator Orion PO3-600 does not tell me much; it looks like multiple configurations are possible... When you use PCI passthrough, all VM memory must be pinned into the actual memory of the system and you cannot overcommit.
I don't see the CPU's integrated graphics in the IOMMU groups. Please make sure to enable it and to use it for the Proxmox host console (and boot messages) instead of the NVidia GPU that you want to pass through. The IOMMU group of the Nvidia GPU looks fine.
Maybe you can share the VM configuration file from the /etc/pve/qemu-server/ directory, so we can see whether you are using hugepages?
Are there any errors in the system log around the time of starting the VM (use the journalctl command on the console)?
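For example, something along these lines (the exact time window does not matter much):
Code:
# follow the journal of the current boot while starting the VM from another session
journalctl -b -f
# or look back at the last few minutes after a failed start
journalctl -b --since "10 minutes ago"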
 
Hello, I forgot to mention that and have added the info: I have 16 GB of RAM. Looks like you already solved the crashing problem. I was stupid and assigned 16 GiB, but effectively only 15.55 GiB are available. That being said, I tried 15.55 GiB (crashed), then 10 GiB, and it stopped crashing. I am not sure whether ballooning was activated, but that was probably the reason for the crashes. I have now assigned 4096 MiB for testing, and there is still no crashing.
Would you turn ballooning on or off? I have it on now.
Starting the VM with the settings above now works fine, but I still get no picture on my monitor. When I start the VM, the Proxmox console login disappears and the monitor loses its signal.
My CPU actually supports graphics, but as I have no video output on the motherboard, there are no settings for that in the Acer BIOS.
AFAIK I don't need the integrated graphics for this, do I?
My VM configuration (/etc/pve/qemu-server/100.conf):
Code:
args: -device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleCo>
bios: ovmf
boot: order=virtio0;net0
cores: 2
cpu: Penryn
efidisk0: local-lvm:vm-100-disk-1,efitype=4m,size=4M
hostpci0: 0000:01:00,pcie=1,x-vga=1
machine: q35
memory: 4096
meta: creation-qemu=6.1.0,ctime=1649284456
name: macos-monterey
net0: virtio=6E:57:0A:EE:68:A9,bridge=vmbr0,firewall=1
numa: 0
ostype: other
parent: Fresh
scsihw: virtio-scsi-pci
smbios1: uuid=8b91a456-73e1-49b1-95d6-dcb633559c8b
sockets: 3
usb0: host=1-2
usb1: host=1-4
vga: memory=4
virtio0: local-lvm:vm-100-disk-0,cache=unsafe,discard=on,size=100G
vmgenid: 4f0987e3-0d2e-482d-8bee-55a05f9b4ffa

[PENDING]
vga: memory=16

[Fresh]
args: -device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleCo>
bios: ovmf
boot: order=virtio0;net0
cores: 2
cpu: Penryn
efidisk0: local-lvm:vm-100-disk-1,efitype=4m,size=4M
machine: q35
memory: 16384
meta: creation-qemu=6.1.0,ctime=1649284456
name: macos-monterey
net0: virtio=6E:57:0A:EE:68:A9,bridge=vmbr0,firewall=1
numa: 0
ostype: other
scsihw: virtio-scsi-pci
smbios1: uuid=8b91a456-73e1-49b1-95d6-dcb633559c8b
snaptime: 1649294178
sockets: 3
vga: vmware
virtio0: local-lvm:vm-100-disk-0,cache=unsafe,discard=on,size=100G
vmgenid: 4f0987e3-0d2e-482d-8bee-55a05f9b4ffa

And the journal around the time of starting the VM:
Code:
Apr 07 23:04:32 proxmox pvedaemon[4436]: start VM 100: UPID:proxmox:00001154:00020428:6>
Apr 07 23:04:32 proxmox pvedaemon[1158]: <root@pam> starting task UPID:proxmox:00001154>
Apr 07 23:04:32 proxmox systemd[1]: Started 100.scope.
Apr 07 23:04:32 proxmox systemd-udevd[4449]: Using default interface naming scheme 'v24>
Apr 07 23:04:32 proxmox systemd-udevd[4449]: ethtool: autonegotiation is unset or enabl>
Apr 07 23:04:32 proxmox kernel: device tap100i0 entered promiscuous mode
Apr 07 23:04:32 proxmox systemd-udevd[4449]: ethtool: autonegotiation is unset or enabl>
Apr 07 23:04:32 proxmox systemd-udevd[4449]: ethtool: autonegotiation is unset or enabl>
Apr 07 23:04:32 proxmox systemd-udevd[4452]: ethtool: autonegotiation is unset or enabl>
Apr 07 23:04:32 proxmox systemd-udevd[4452]: Using default interface naming scheme 'v24>
Apr 07 23:04:32 proxmox kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Apr 07 23:04:32 proxmox kernel: fwbr100i0: port 1(fwln100i0) entered disabled state
Apr 07 23:04:32 proxmox kernel: device fwln100i0 entered promiscuous mode
Apr 07 23:04:32 proxmox kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Apr 07 23:04:32 proxmox kernel: fwbr100i0: port 1(fwln100i0) entered forwarding state
Apr 07 23:04:32 proxmox kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Apr 07 23:04:32 proxmox kernel: vmbr0: port 2(fwpr100p0) entered disabled state
Apr 07 23:04:32 proxmox kernel: device fwpr100p0 entered promiscuous mode
Apr 07 23:04:32 proxmox kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Apr 07 23:04:32 proxmox kernel: vmbr0: port 2(fwpr100p0) entered forwarding state
Apr 07 23:04:32 proxmox kernel: fwbr100i0: port 2(tap100i0) entered blocking state
Apr 07 23:04:32 proxmox kernel: fwbr100i0: port 2(tap100i0) entered disabled state
Apr 07 23:04:32 proxmox kernel: fwbr100i0: port 2(tap100i0) entered blocking state
Apr 07 23:04:32 proxmox kernel: fwbr100i0: port 2(tap100i0) entered forwarding state
Apr 07 23:04:33 proxmox kernel: vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x19>
Apr 07 23:04:33 proxmox kernel: vfio-pci 0000:01:00.0: Invalid PCI ROM header signature>
Apr 07 23:04:34 proxmox pvedaemon[1158]: <root@pam> end task UPID:proxmox:00001154:0002>
Apr 07 23:04:36 proxmox QEMU[4445]: kvm: vfio-pci: Cannot read device rom at 0000:01:00>
Apr 07 23:04:36 proxmox QEMU[4445]: Device option ROM contents are probably invalid (ch>
Apr 07 23:04:36 proxmox QEMU[4445]: Skip option ROM probe with rombar=0, or load from f>
Apr 07 23:04:36 proxmox kernel: vfio-pci 0000:01:00.0: Invalid PCI ROM header signature>
lines 32539-32559/32559 (END)

Something is going wrong at the end there...
Thank you very much for now
 
Ballooning does not work when you use passthrough. PCI(e) devices can access all memory of the VM at any time and therefore all memory must be in actual RAM (your physical 16GB). The VM cannot give memory back (by ballooning) to Proxmox, so there is no point in turning it on for that VM. Remember that Proxmox (and other VMs) also need memory.
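If you want to turn it off for that VM, unticking Ballooning in the GUI or something like the following should do it:
Code:
# disable the balloon device for VM 100 (sets balloon: 0 in the VM config)
qm set 100 --balloon 0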

When you pass the Nvidia GPU through to the VM, you have no Proxmox console anymore. Also, the NVidia GPU might not reset properly when it has been used by Proxmox before starting the VM. Therefore it is much easier to boot Proxmox with the integrated graphics of the CPU. This will increase your chances of getting GPU passthrough working a lot.

The Invalid PCI ROM header signature does indicate a problem, but I don't know how to fix that right at this moment. Maybe you can also search for that text on this and other forums?
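The only concrete hints I can offer are the two workarounds that QEMU itself prints in your log: skip the option ROM probe, or load the ROM from a file. In the VM configuration that would look roughly like this; I have not tested it myself and the file name is only an example:
Code:
# skip the option ROM probe for the passed-through GPU
hostpci0: 0000:01:00,pcie=1,x-vga=1,rombar=0
# or point QEMU at a (patched) ROM file placed in /usr/share/kvm/
#hostpci0: 0000:01:00,pcie=1,x-vga=1,romfile=gtx1070_patched.rom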
 
How do I set the Proxmox console to the integrated graphics only? I don't think that's possible via the BIOS, because the iGPU can't even be discovered in the IOMMU groups.
I actually don't need the console; is it still worth setting it up like that?
I already searched for the ROM error but didn't find anything about my specific problem; maybe I'll be renaming this thread again.
 
How do I set the Proxmox console to the integrated graphics only? I don't think that's possible via the BIOS, because the iGPU can't even be discovered in the IOMMU groups.
Enable the integrated graphics in the BIOS of your motherboard, choose IGP over PCIe in the relevant BIOS setting, and connect a display to one of its outputs. That last part seems to be essential in some cases. According to Intel, your CPU has integrated graphics. If you can provide a link to the motherboard manual, I can give more specific instructions.
I actually don't need the console; is it still worth setting it up like that?
As I said, GPU passthrough is simpler when the GPU is not touched by Proxmox (or by the system while booting); a sketch of how that is usually done follows below. Also, the console gives you a way to see and debug issues while trying to get passthrough to work.
EDIT: Search for Single GPU Passthrough experiences if you don't boot with the integrated graphics.
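To keep the host from touching the GPU, the usual approach is to blacklist the host drivers and bind the card to vfio-pci at boot. A sketch, using the device IDs from your lspci output (not specific to your board, so treat it as a starting point):
Code:
# /etc/modprobe.d/blacklist.conf - keep the host drivers away from the card
blacklist nouveau
blacklist nvidiafb

# /etc/modprobe.d/vfio.conf - bind the GPU and its audio function to vfio-pci
options vfio-pci ids=10de:1b81,10de:10f0
Afterwards run update-initramfs -u -k all and reboot.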
I already searched for the ROM error but didn't find anything about my specific problem; maybe I'll be renaming this thread again.
Some people have reported in this forum that they needed to patch their NVidia ROM file (not the actual GPU) to get it to work. I'm sorry, but I have no experience with NVidia cards.
 
As I said, I already looked through my BIOS and I'm pretty sure there is no way to activate the integrated graphics, as my motherboard has no video output. Am I missing something? It's useless then, isn't it? I couldn't see the console anyway, because I have no output for it.
I will look in my BIOS again and look up the exact model.
I'll also have a look at that Nvidia ROM patching; if someone has experience with it, feel free to give me a hint ;)
Thank you
 
Sorry, my fault for assuming that all Intel motherboards support the integrated graphics, but I guess Acer has made something special.
Maybe something like this can be of assistance? It's not for Proxmox, but dumping the GPU ROM, patching it, and passing the patched ROM file can be done on Proxmox as well.
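If you go down that road, dumping the ROM on the Proxmox host itself is usually done through sysfs, roughly like this (a sketch; the file names are only examples, and the dump can fail while the card is in use):
Code:
# read the GPU's option ROM via sysfs
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom
cat rom > /root/gtx1070_orig.rom
echo 0 > rom

# after patching the dump, place it where Proxmox/QEMU looks for ROM files and reference it
cp /root/gtx1070_patched.rom /usr/share/kvm/
qm set 100 --hostpci0 0000:01:00,pcie=1,x-vga=1,romfile=gtx1070_patched.rom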
 
