GPU + USB passthrough issue

argon1069

New Member
Jun 8, 2022
Hey guys, I have been able to pass my GPU/USB through to my VMs successfully. I have a single GPU (NVIDIA GT 730) which I use for Windows and sometimes for a macOS VM with OpenCore.

The problem is that if any HDMI/VGA cable is connected to the GPU, or a mouse/keyboard is plugged in, while Proxmox (the host OS) is booting, then GPU/USB passthrough doesn't work and even `iommu` ends up disabled. As a result the VMs crash or don't boot at all. But if I plug these devices in after Proxmox has booted, they work totally fine.

It seems like ownership of the connected devices is not dropped by the host OS. Please help me fix this issue.
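(As a quick sanity check, `lspci -k` shows which kernel driver currently owns the GPU; if a host driver is listed under "Kernel driver in use" instead of `vfio-pci`, the host hasn't released the device. A minimal sketch, parsing a sample stanza — the sample text is an assumption; on a real host you would run `lspci -k -s 01:00.0` directly:)

```shell
# Sketch: extract the "Kernel driver in use" line from a sample `lspci -k`
# stanza. (Sample is an assumption; substitute your GPU's PCI address.)
sample='01:00.0 VGA compatible controller: NVIDIA Corporation GK208B [GeForce GT 730]
  Kernel driver in use: vfio-pci
  Kernel modules: nvidiafb, nouveau'
printf '%s\n' "$sample" | awk -F': ' '/Kernel driver in use/ {print $2}'
# → vfio-pci
```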
 
Can you post the output of `dmesg` for one boot where it's working and one where it's not?
 

Another thing I noticed: when the VGA cable is connected while booting, `IOMMU` is still enabled, but when I boot the VM that uses that VGA output, it crashes and after that `IOMMU` is disabled. I have attached the logs.
 

Attachments

Sounds like you are using pve-kernel 5.15 and the BOOTFB does not release the iomem (check with cat /proc/iomem) when a display is connected. If that's the case, then I know of only one work-around for NVidia GPUs.
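A minimal sketch of that check — here run against an embedded sample excerpt so it is self-contained (the sample and the GPU address 01:00.0 are assumptions; on the host you would grep `/proc/iomem` directly):

```shell
# If BOOTFB/simplefb entries are nested under the GPU's BAR, the host still
# holds the boot framebuffer. Sample excerpt (assumption: GPU at 01:00.0):
sample='  40000000-49ffffff : PCI Bus 0000:01
    40000000-47ffffff : 0000:01:00.0
      40000000-402fffff : BOOTFB
        40000000-402fffff : simplefb'
# On a real host: grep -E "BOOTFB|simplefb" /proc/iomem
printf '%s\n' "$sample" | grep -cE 'BOOTFB|simplefb'
# → 2
```

A count of zero would mean nothing on the host is still claiming the boot framebuffer region.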
Yeah, I also think that's the case. I have two questions:

## What should I verify in `cat /proc/iomem`? I see the output below:
Code:
00000000-00000fff : Reserved
00001000-0009dfff : System RAM
0009e000-0009efff : Reserved
0009f000-0009ffff : System RAM
000a0000-000fffff : Reserved
  000a0000-000bffff : PCI Bus 0000:00
  00000000-00000000 : PCI Bus 0000:00
  00000000-00000000 : PCI Bus 0000:00
  00000000-00000000 : PCI Bus 0000:00
  00000000-00000000 : PCI Bus 0000:00
  000e0000-000fffff : PCI Bus 0000:00
    000f0000-000fffff : System ROM
00100000-36107fff : System RAM
36108000-3614cfff : Reserved
3614d000-3a364fff : System RAM
3a365000-3a365fff : Reserved
3a366000-3bd81fff : System RAM
3bd82000-3e281fff : Reserved
3e282000-3e501fff : ACPI Tables
3e502000-3e60cfff : ACPI Non-volatile Storage
3e60d000-3eefefff : Reserved
3eeff000-3effefff : Unknown E820 type
3efff000-3effffff : System RAM
3f000000-3fffffff : Reserved
40000000-dfffffff : PCI Bus 0000:00
  40000000-49ffffff : PCI Bus 0000:01
    40000000-47ffffff : 0000:01:00.0
      40000000-402fffff : BOOTFB
        40000000-402fffff : simplefb
    48000000-49ffffff : 0000:01:00.0
  4a000000-4b0fffff : PCI Bus 0000:01
    4a000000-4affffff : 0000:01:00.0
    4b000000-4b07ffff : 0000:01:00.0
    4b080000-4b083fff : 0000:01:00.1
  4b100000-4b1fffff : 0000:00:1f.3
  4b200000-4b2fffff : PCI Bus 0000:02
    4b200000-4b203fff : 0000:02:00.0
      4b200000-4b203fff : nvme
  4b300000-4b31ffff : 0000:00:1f.6
    4b300000-4b31ffff : e1000e
  4b320000-4b32ffff : 0000:00:14.0
    4b320000-4b32ffff : xhci-hcd
  4b330000-4b333fff : 0000:00:1f.3
    4b330000-4b333fff : ICH HD audio
  4b334000-4b337fff : 0000:00:14.2
  4b338000-4b339fff : 0000:00:17.0
    4b338000-4b339fff : ahci
  4b33a000-4b33a0ff : 0000:00:1f.4
  4b33b000-4b33b7ff : 0000:00:17.0
    4b33b000-4b33b7ff : ahci
  4b33c000-4b33c0ff : 0000:00:17.0
    4b33c000-4b33c0ff : ahci
  4b33d000-4b33dfff : 0000:00:16.0
    4b33d000-4b33dfff : mei_me
  4b33e000-4b33efff : 0000:00:14.2
  4b33f000-4b33ffff : 0000:00:1f.5
e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
  e0000000-efffffff : Reserved
    e0000000-efffffff : pnp 00:04
fc000000-fc00ffff : Reserved
  fc000000-fc00ffff : pnp 00:05
fd000000-fd68ffff : pnp 00:06
fd690000-fd69ffff : INT34C6:00
  fd690000-fd69ffff : INT34C6:00 INT34C6:00
fd6a0000-fd6affff : INT34C6:00
  fd6a0000-fd6affff : INT34C6:00 INT34C6:00
fd6b0000-fd6bffff : INT34C6:00
  fd6b0000-fd6bffff : INT34C6:00 INT34C6:00
fd6c0000-fd6cffff : pnp 00:06
fd6d0000-fd6dffff : INT34C6:00
  fd6d0000-fd6dffff : INT34C6:00 INT34C6:00
fd6e0000-fd6effff : INT34C6:00
  fd6e0000-fd6effff : INT34C6:00 INT34C6:00
fd6f0000-fdffffff : pnp 00:06
fe000000-fe010fff : Reserved
fe04c000-fe04ffff : pnp 00:06
fe050000-fe0affff : pnp 00:06
fe0d0000-fe0fffff : pnp 00:06
fe200000-fe7fffff : pnp 00:06
fec00000-fec00fff : Reserved
  fec00000-fec003ff : IOAPIC 0
fed00000-fed00fff : Reserved
  fed00000-fed003ff : HPET 0
    fed00000-fed003ff : PNP0103:00
fed10000-fed17fff : pnp 00:04
fed20000-fed7ffff : pnp 00:04
fed91000-fed91fff : dmar0
feda0000-feda0fff : pnp 00:04
feda1000-feda1fff : pnp 00:04
fee00000-fee00fff : Local APIC
  fee00000-fee00fff : Reserved
ff000000-ffffffff : Reserved
  ff000000-ffffffff : pnp 00:06
100000000-4bbffffff : System RAM
  411200000-4122025bf : Kernel code
  412400000-412df0fff : Kernel rodata
  412e00000-41323e4bf : Kernel data
  413536000-4139fffff : Kernel bss

## For the kernel: yes, it is the 5.15 pve-kernel. Is there anything else I can use for better performance? I installed Proxmox from the official Proxmox VE ISO; can you please suggest something?
 
40000000-dfffffff : PCI Bus 0000:00
40000000-49ffffff : PCI Bus 0000:01
40000000-47ffffff : 0000:01:00.0
40000000-402fffff : BOOTFB
40000000-402fffff : simplefb
48000000-49ffffff : 0000:01:00.0
If 01:00.0 is the GPU you want to pass through, then you need to get simplefb to release the BOOTFB memory region.
Unfortunately, `video=simplefb:off` does not work. Either boot the system with another GPU as the boot display or use a work-around.
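For reference, the work-around most often reported for this on the 5.15 kernel is to stop the kernel from setting up the simple framebuffer at all, via the `initcall_blacklist=sysfb_init` kernel parameter. A sketch of applying it for the two boot loaders Proxmox uses (verify which one your install boots with before relying on this):

```shell
# GRUB systems: append the parameter to GRUB_CMDLINE_LINUX_DEFAULT in
# /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet initcall_blacklist=sysfb_init"
# then apply it:
update-grub

# systemd-boot systems (e.g. ZFS installs): append the same parameter to
# the single line in /etc/kernel/cmdline, then:
proxmox-boot-tool refresh
```

Note that this prevents any framebuffer console on that GPU, so the host's local display goes dark from early boot onward.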
 

This workaround didn't work; I get the same issue with the GPU. I found something similar in nicksherlock's guide: https://www.nicksherlock.com/2018/11/my-macos-vm-proxmox-setup/. Will this work? I'm not sure what `new_id` should be.

Code:
#!/usr/bin/env bash

if [ "$2" == "pre-start" ]
then
    # First release devices from their current driver (by their PCI bus IDs)
    echo 0000:00:1d.0 > /sys/bus/pci/devices/0000:00:1d.0/driver/unbind
    echo 0000:00:1a.0 > /sys/bus/pci/devices/0000:00:1a.0/driver/unbind
    echo 0000:81:00.0 > /sys/bus/pci/devices/0000:81:00.0/driver/unbind
    echo 0000:82:00.0 > /sys/bus/pci/devices/0000:82:00.0/driver/unbind
    echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind

    # Then attach them by ID to VFIO
    echo 8086 1d2d > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 8086 1d26 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 1b73 1100 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo 144d a802 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo vfio-pci > /sys/bus/pci/devices/0000:0a:00.0/driver_override
    echo 0000:0a:00.0 > /sys/bus/pci/drivers_probe
fi
 
I would expect Proxmox to do more or less the same (unbind the current driver, rebind to vfio-pci), but if it works for you, who am I to complain.
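On the earlier `new_id` question: the two hex values written to `new_id` are the PCI vendor and device IDs, which `lspci -nn` prints in brackets at the end of each line. A sketch of pulling them out in the format `new_id` expects — the sample line (a GT 730 at 01:00.0 with IDs 10de:1287) is an assumption; check your own `lspci -nn` output:

```shell
# Sample line as produced by `lspci -nn` (assumed; substitute your device):
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208B [GeForce GT 730] [10de:1287] (rev a1)'
# Extract the last [vendor:device] pair as "vendor device" for new_id:
printf '%s\n' "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}\):\([0-9a-f]\{4\}\)\][^[]*$/\1 \2/p'
# → 10de 1287
```

That output is exactly the string you would write to `/sys/bus/pci/drivers/vfio-pci/new_id`.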
 
