PVE 6.2 GPU PCIe Passthrough issue

Myghalloween

First of all, sorry for my bad English, I'm French (thanks, Google Translate).
I'm currently trying to follow the how-to, but I have an issue because PVE doesn't see my GPU.
https://pve.proxmox.com/wiki/Pci_passthrough
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
My setup:
MB: ASRock Z77 Extreme4 (UEFI, with VT-d enabled)
CPU: i7-3770 (VT-x compatible)
PVE 6.2
GPU: ASUS GTX 670 DirectCU II

Following the how-to, IOMMU is enabled in GRUB with this line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

GRUB updated with:
update-grub

After reboot, dmesg | grep -e DMAR -e IOMMU returns results.

Required modules in /etc/modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Remapping seems to be supported, because dmesg | grep 'remapping' returns "DMAR-IR: Enabled IRQ remapping in x2apic mode".

I don't know how to verify IOMMU isolation, or whether it's really necessary, but when I try to get my GPU ID with lspci -v, nothing in the output matches my GPU.

I also tried to get my GPU ID through the VM settings (UEFI BIOS, Win 10 x64), but nothing there either.
Is my GPU's non-UEFI vBIOS responsible?
Do I have to reinstall PVE 6.2 with my motherboard in non-UEFI mode and start the VM with SeaBIOS?

I need some help, please.
Thanks
 
I used to have a similar system. Please show us the output of: for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done;
Are you using ZFS? If so, you might be using systemd-boot instead of GRUB.
You can start a VM with either SeaBIOS or OVMF; you do not need to reinstall Proxmox. You do, however, need to reinstall Windows if you switch the VM from SeaBIOS to OVMF or the other way around.
 
Thanks for your help.
I'm just starting out with Proxmox and Linux terminal commands, so please give me step-by-step commands.
I'm not using ZFS.
 
I'm just starting out with Proxmox and Linux terminal commands, so please give me step-by-step commands.
I cannot give you the commands to get it working yet; I need information first. Please do the following:
  1. Log in to the console on the Proxmox host as the root user.
  2. Run this command exactly: for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done;
  3. Copy the output and show it to us. Then we can see whether the IOMMU is working.
  4. Show us the configuration of your VM, which can be found in the directory /etc/pve/qemu-server/.
 
Hello,

I don't know how you knew that command; it's not something one could just invent! I would never have found it on my own, that's for sure.
Thank you for your help. Here is my PuTTY output:
Code:
login as: root
root@192.168.1.10's password:
Linux ATLANTA 5.4.65-1-pve #1 SMP PVE 5.4.65-1 (Mon, 21 Sep 2020 15:40:22 +0200) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Oct 12 01:14:48 2020
root@ATLANTA:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done;
IOMMU group 0 00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller [8086:0150] (rev 09)
IOMMU group 10 00:1c.5 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 6 [8086:1e1a] (rev c4)
IOMMU group 11 00:1c.7 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 8 [8086:1e1e] (rev c4)
IOMMU group 12 00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
IOMMU group 13 00:1f.0 ISA bridge [0601]: Intel Corporation Z77 Express Chipset LPC Controller [8086:1e44] (rev 04)
IOMMU group 13 00:1f.2 SATA controller [0106]: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] [8086:1e02] (rev 04)
IOMMU group 13 00:1f.3 SMBus [0c05]: Intel Corporation 7 Series/C216 Chipset Family SMBus Controller [8086:1e22] (rev 04)
IOMMU group 14 03:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
IOMMU group 15 04:00.0 Ethernet controller [0200]: Broadcom Limited NetLink BCM57781 Gigabit Ethernet PCIe [14e4:16b1] (rev 10)
IOMMU group 16 05:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 01)
IOMMU group 17 07:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller [1b21:1042]
IOMMU group 1 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)
IOMMU group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 670] [10de:1189] (rev a1)
IOMMU group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
IOMMU group 2 00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller [8086:0162] (rev 09)
IOMMU group 3 00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)
IOMMU group 4 00:16.0 Communication controller [0780]: Intel Corporation 7 Series/C216 Chipset Family MEI Controller #1 [8086:1e3a] (rev 04)
IOMMU group 5 00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
IOMMU group 6 00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C216 Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)
IOMMU group 7 00:1c.0 PCI bridge [0604]: Intel Corporation 7 Series/C216 Chipset Family PCI Express Root Port 1 [8086:1e10] (rev c4)
IOMMU group 8 00:1c.3 PCI bridge [0604]: Intel Corporation 7 Series/C216 Chipset Family PCI Express Root Port 4 [8086:1e16] (rev c4)
IOMMU group 9 00:1c.4 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 5 [8086:1e18] (rev c4)
root@ATLANTA:~#

What reassures me is that I can finally see my GPU.
I'll post my VM configuration as soon as I can today.

Question: I've seen in some how-tos on the web that it's necessary to blacklist the GPU driver(s) so that PVE doesn't use the GPU. Is it really useful to blacklist drivers? And if I later use a CPU without integrated graphics, could that be a problem?
 
Good, your GPU (+ audio) is isolated in a group without other devices. This means that the IOMMU is working.
Question: I've seen in some how-tos on the web that it's necessary to blacklist the GPU driver(s) so that PVE doesn't use the GPU. Is it really useful to blacklist drivers? And if I later use a CPU without integrated graphics, could that be a problem?
It depends. It always helps when a GPU is untouched by the host or BIOS. Also make sure you use another GPU during startup of your computer and Proxmox.
You can prevent the host from touching the GPU by blacklisting or by binding the vfio-pci driver to it. (Please never install the proprietary NVidia drivers on Proxmox!)
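For the blacklisting option, a minimal sketch could look like this (the file name is arbitrary; this assumes the open-source nouveau driver would otherwise claim the card):
Code:
# /etc/modprobe.d/blacklist-gpu.conf
# keep the host kernel from loading a display driver for the NVIDIA card
blacklist nouveau
blacklist nvidia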

Binding the vfio-pci driver can be done by adding a file to /etc/modprobe.d/ with a line like this for your specific device: options vfio-pci ids=10de:1189,10de:0e0a
To make sure that the vfio-pci driver loads first, you might need to add another line like softdep nouveau pre: vfio_pci
Run update-initramfs -u and reboot for these changes to take effect. You can see which kernel drivers are available and which are in use with lspci -k.
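Put together, a sketch of /etc/modprobe.d/vfio.conf for your GTX 670 (the IDs are taken from your IOMMU listing above; adjust them if your hardware differs):
Code:
# /etc/modprobe.d/vfio.conf
# bind the GPU and its HDMI audio function to vfio-pci at boot
options vfio-pci ids=10de:1189,10de:0e0a
# load vfio-pci before nouveau can claim the GPU
softdep nouveau pre: vfio_pci
Then apply the change and verify which driver is bound to the card:
Code:
update-initramfs -u
reboot
# after the reboot, the GPU at 01:00 should show 'Kernel driver in use: vfio-pci'
lspci -k -s 01:00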
 
Hi,
Here is my VM config

Code:
root@ATLANTA:~# nano /etc/pve/qemu-server/702.conf
  GNU nano 3.2             /etc/pve/qemu-server/702.conf

agent: 1
audio0: device=intel-hda,driver=spice
bios: ovmf
bootdisk: virtio0

I tried to assign my GPU, but there is an error:
Code:
root@ATLANTA:~# echo "options vfio-pci ids=10de:1189,10de:0e0a" > /etc/mobprobe.d/vfio.conf
-bash: /etc/mobprobe.d/vfio.conf: No such file or directory
 
Your VM config looks incomplete: please show us the whole contents.
You have a typo: please try /etc/modprobe.d/vfio.conf (and don't forget update-initramfs -u).
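In other words, something like this (your earlier command with the path corrected):
Code:
echo "options vfio-pci ids=10de:1189,10de:0e0a" > /etc/modprobe.d/vfio.conf
update-initramfs -u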
 
Yes, I mistyped modprobe, you're right!

Full VM config:
Code:
root@ATLANTA:~# nano /etc/pve/qemu-server/702.conf
  GNU nano 3.2             /etc/pve/qemu-server/702.conf

agent: 1
audio0: device=intel-hda,driver=spice
bios: ovmf
bootdisk: virtio0
cores: 4
efidisk0: local-lvm:vm-702-disk-0,size=4M
machine: q35
memory: 4096
name: HomePlayStation
net0: virtio=3E:B8:E5:BE:B0:BA,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
sata2: none,media=cdrom
scsihw: virtio-scsi-pci
smbios1: uuid=53a650b1-d7ec-44a3-a391-9f47e31b7969
sockets: 1
vga: qxl,memory=32
virtio0: local-lvm:vm-702-disk-1,cache=writeback,size=60G
vmgenid: f41ffed9-261a-4f94-907e-ba8f3fdb7a94

I also added this line:
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

Where do I add softdep nouveau pre: vfio_pci?
 
I also added this line:
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

Where do I add softdep nouveau pre: vfio_pci?
Did you add that line, or did you execute that command on the console? The way you wrote it is a bit confusing to me.
You can add softdep lines to /etc/modprobe.d/vfio.conf as well (or any other file in that directory, if you wish). You can learn more about it here.
I don't see any hostpci0 in your /etc/pve/qemu-server/702.conf. You can learn more about it here, in the section VM Configuration.
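As a rough sketch, a hostpci0 entry for your GPU at 01:00 might look like this in 702.conf (the exact options depend on your setup):
Code:
hostpci0: 01:00,pcie=1,x-vga=on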
 
Sorry if I am not precise enough in my answers.
I executed echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf on the PuTTY console.
OK, I'll add the softdep line to vfio.conf.
hostpci0 does not appear, perhaps because I haven't passed my GPU through to this VM yet?
 
Sorry if I am not precise enough in my answers.
I executed echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf on the PuTTY console.
Not a problem, I just wanted to make sure that I understand you.

OK, I'll add the softdep line to vfio.conf.
Don't forget to run update-initramfs -u and to reboot after making changes to files in /etc/modprobe.d.
hostpci0 does not appear, perhaps because I haven't passed my GPU through to this VM yet?
Correct, you are not passing through a GPU to that VM. Please note that you probably have to use x-vga=on and you may have trouble with the NVidia drivers in the VM.
 
My new VM .conf:
I added cpu: host,hidden=1,flags=+pcid and hostpci0: 01:00,pcie=1,x-vga=on.

Code:
root@ATLANTA:~# nano /etc/pve/qemu-server/702.conf
  GNU nano 3.2             /etc/pve/qemu-server/702.conf

agent: 1
audio0: device=intel-hda,driver=spice
bios: ovmf
bootdisk: virtio0
cores: 4
cpu: host,hidden=1,flags=+pcid
hostpci0: 01:00,pcie=1,x-vga=on
efidisk0: local-lvm:vm-702-disk-0,size=4M
machine: q35
memory: 4096
name: HomePlayStation
net0: virtio=3E:B8:E5:BE:B0:BA,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
sata2: none,media=cdrom
scsihw: virtio-scsi-pci
smbios1: uuid=53a650b1-d7ec-44a3-a391-9f47e31b7969
sockets: 1
vga: qxl,memory=32
virtio0: local-lvm:vm-702-disk-1,cache=writeback,size=60G
vmgenid: f41ffed9-261a-4f94-907e-ba8f3fdb7a94
 
Apparently you have to get the Nvidia driver to believe that it is not in a VM :oops:

I also read something that caught my attention: GPUs before the 7xx series are not compatible with OVMF, only SeaBIOS. Is that true? :rolleyes:

Edit: My first test is to install the Nvidia driver manually on my OVMF VM.
 
This is just what I'm doing... It's not easy.
Maybe this post on the Level1Techs forum might help?
Apparently you have to get the Nvidia driver to believe that it is not in a VM :oops:
I don't use Windows and I don't use NVidia (because of this and the binary driver shim issues), so I can't help you with this.
I also read something that caught my attention: GPUs before the 7xx series are not compatible with OVMF, only SeaBIOS. Is that true? :rolleyes:
You might be right. If the GPU is old, it might not have UEFI support in its firmware. I had a similar thing with a Radeon GPU: it would not show the startup screen, but it worked fine once the driver was loaded (on Linux). If you switch your Windows VM from UEFI (OVMF) to BIOS (SeaBIOS), I think you need to reinstall and reactivate Windows.
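If you do decide to test SeaBIOS, you can switch the firmware of the existing VM from the command line rather than recreating it (a sketch using your VM ID 702; as said above, expect to reinstall Windows):
Code:
qm set 702 --bios seabios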
 
Unfortunately, no.
I don't use Windows and I don't use NVidia (because of this and the binary driver shim issues), so I can't help you with this.

You might be right. If the GPU is old, it might not have UEFI support in its firmware. I had a similar thing with a Radeon GPU: it would not show the startup screen, but it worked fine once the driver was loaded (on Linux). If you switch your Windows VM from UEFI (OVMF) to BIOS (SeaBIOS), I think you need to reinstall and reactivate Windows.
It seems like PVE natively hides the VM from the Nvidia driver.
When I install the Nvidia driver manually, error code 43 disappears in Device Manager, but after a restart, code 43 is back again.
I'll now create a SeaBIOS Win 10 VM for testing (I need to reinstall Windows).
 
