[SOLVED] Proxmox 7.1 GPU Passthrough

Nov 29, 2021
Hello, I'm having an issue getting Proxmox GPU passthrough to work on a newer re-installation; it was working before I reinstalled.

ThinkCentre M83 Tower
Lenovo SDK0E50510 WIN (SHARKBAY) Motherboard
UEFI Only
Intel i7-4790 CPU
250 GB Samsung SSD + 256 GB Seagate SSD (ZFS RAID1 - 250 GB)


I previously had Proxmox 7 (I believe) and was using LVM for VM storage. I was able to get GPU passthrough working on the same machine using GRUB. My instructions are below.

# Back up and edit the GRUB config
cp /etc/default/grub /root/grub-bkup
nano /etc/default/grub

# Set this line in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"

# Apply the change and reboot
update-grub
reboot


dmesg | grep -e DMAR -e IOMMU
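If IOMMU came up, the output should contain lines roughly like the following (exact wording varies by kernel version):
Code:
DMAR: IOMMU enabled
DMAR: Intel(R) Virtualization Technology for Directed I/O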

cp /etc/modules /root/modules-bkup
nano /etc/modules
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd


# Modules required for Intel GVT
kvmgt
exngt
vfio-mdev


reboot

lspci -nnv | grep VGA
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06) (prog-if 00 [VGA controller])


*The PCI address of the iGPU is 00:02.0
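To double-check that the iGPU actually ends up in an IOMMU group after the reboot, the sysfs paths can be inspected directly, for example:
Code:
ls -l /sys/bus/pci/devices/0000:00:02.0/iommu_group/devices/
find /sys/kernel/iommu_groups/ -type l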



I'm now trying to get it working with my newer install, which was done using ZFS and, as I understand it, boots via EFI (systemd-boot).

root@proxmox:~# efibootmgr -v
BootCurrent: 000E
Timeout: 1 seconds
BootOrder: 0000,000E,001E,001B,001D,0004,0005
Boot0000* Linux Boot Manager HD(2,GPT,4cb46da8-60b0-42e0-8c0a-cb78ae72d033,0x800,0x100000)/File(\EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI)
Boot0004* Generic Usb Device VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot0005* CD/DVD Device VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot000E* Samsung SSD 840 Series BBS(HD,,0x0)..BO
Boot001B* UEFI OS HD(2,GPT,4cb46da8-60b0-42e0-8c0a-cb78ae72d033,0x800,0x100000)/File(\EFI\BOOT\BOOTX64.EFI)
Boot001D* UEFI OS HD(2,GPT,e09c9466-1cea-41fd-ae7f-72ef59a980e3,0x800,0x100000)/File(\EFI\BOOT\BOOTX64.EFI)
Boot001E* Seagate SSD BBS(HD,,0x0)..BO
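As a sanity check that the system really boots through systemd-boot (and therefore reads /etc/kernel/cmdline rather than the GRUB config), proxmox-boot-tool can report what it manages:
Code:
proxmox-boot-tool status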

I tried the following instructions, but the Proxmox GUI still shows that IOMMU is not enabled.

cp /etc/kernel/cmdline /etc/kernel/cmdline-bkup
nano /etc/kernel/cmdline
quiet intel_iommu=on
iommu=pt
enable_gvt=1


proxmox-boot-tool refresh
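To confirm the options were actually applied after the reboot, the booted command line can be compared against the file:
Code:
cat /proc/cmdline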

dmesg | grep -e DMAR -e IOMMU

cp /etc/modules /root/modules-bkup
nano /etc/modules
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd


# Modules required for Intel GVT
kvmgt
exngt
vfio-mdev


reboot



Thanks for any help
 
These are my settings

cat /etc/kernel/cmdline
Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs i915.enable_gvt=1 kvm.ignore_msrs=1 intel_iommu=on iommu=pt

cat /etc/modules
Code:
kvmgt
xengt
vfio-mdev

vfio
vfio-iommu-type1
vfio-pci
vfio-virqfd

The last thing I do before reboot is proxmox-boot-tool refresh
You can try and see if update-initramfs -u -k all fixes anything.
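In other words, something along these lines, followed by a reboot:
Code:
update-initramfs -u -k all
proxmox-boot-tool refresh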
 
Thanks a ton - that fixed it. I had a typo of exngt instead of xengt. Possibly auto-corrected by OneNote.
There seem to be just as many webpages (or more) that list exngt instead of xengt in their sample /etc/modules. I am so glad you added this last comment.
 
I followed this thread and fixed the "IOMMU not detected" issue when adding a PCI board to a VM, but now I get a strange error: when I start the VM I get the errors below and have to physically reboot the entire server to get control back.
Also, checking /sys/kernel/iommu_groups/, no PCI device actually shows up there :-(
Any tips for this issue?
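For reference, a common way to list every IOMMU group and the devices in it is a small loop over sysfs (adjust as needed):
Code:
#!/bin/bash
# print each IOMMU group followed by the PCI device it contains
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d%/devices/*}; g=${g##*/}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done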
Code:
[185.096406] ata5.04: disabled
[185.096421] ata5.15: disabled
[185.125787] sd 4:4:0:0: [sda] Synchronizing SCSI cache
[185.125822] sd 4:4:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.125841] sd 4:4:0:0: [sda] Stopping disk
[185.125852] sd 4:4:0:0: [sda] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.126097] ata6.00: disabled
[185.126179] ata6.02: disabled
[185.126266] ata6.03: disabled
[185.126345] ata6.04: disabled
[185.126355] ata6.15: disabled
[185.165821] sd 5:0:0:0: [sdb] Synchronizing SCSI cache
[185.165851] sd 5:0:0:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.165870] sd 5:0:0:0: [sdb] Stopping disk
[185.165881] sd 5:0:0:0: [sdb] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.189836] sd 5:2:0:0: [sdc] Synchronizing SCSI cache
[185.189866] sd 5:2:0:0: [sdc] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.189885] sd 5:2:0:0: [sdc] Stopping disk
[185.189896] sd 5:2:0:0: [sdc] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.229820] sd 5:3:0:0: [sdd] Synchronizing SCSI cache
[185.229848] sd 5:3:0:0: [sdd] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.229866] sd 5:3:0:0: [sdd] Stopping disk
[185.229878] sd 5:3:0:0: [sdd] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.305773] sd 5:4:0:0: [sde] Synchronizing SCSI cache
[185.305801] sd 5:4:0:0: [sde] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.305820] sd 5:4:0:0: [sde] Stopping disk
[185.305832] sd 5:4:0:0: [sde] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[185.888211] device tap100i0 entered promiscuous mode
[185.921093] vmbr0: port 2(fwpr100p0) entered blocking state
[185.921110] vmbr0: port 2(fwpr100p0) entered disabled state
[185.921167] device fwpr100p0 entered promiscuous mode
[185.921202] vmbr0: port 2(fwpr100p0) entered blocking state
[185.921213] vmbr0: port 2(fwpr100p0) entered forwarding state
[185.925255] fwbr100i0: port 1(fwln100i0) entered blocking state
[185.925271] fwbr100i0: port 1(fwln100i0) entered disabled state
[185.925311] device fwln100i0 entered promiscuous mode
[185.925338] fwbr100i0: port 1(fwln100i0) entered blocking state
[185.925349] fwbr100i0: port 1(fwln100i0) entered forwarding state
[185.929453] fwbr100i0: port 2(tap100i0) entered blocking state
[185.929469] fwbr100i0: port 2(tap100i0) entered disabled state
[185.929526] fwbr100i0: port 2(tap100i0) entered blocking state
[185.929538] fwbr100i0: port 2(tap100i0) entered forwarding state
 
Adding (or removing) PCI(e) devices can change the PCI IDs of other devices (and this can cause network device names to change as well). Don't start VMs automatically on boot; check the new PCI IDs with lspci and change the VM configuration accordingly (for all VMs that use passthrough). Other scripts or work-arounds may also need to be adjusted.
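For example (the VM ID and PCI address here are only placeholders), after looking up the controller's new address the passthrough entry can be updated from the shell instead of the GUI:
Code:
lspci -nn | grep -i sata
qm set 100 -hostpci0 03:00.0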
 
Because I just installed Proxmox, I was careful to configure the TrueNAS VM not to start on boot.
I get this error every time I start the VM.

I followed your tip because I was not certain about the PCI ID, and I can confirm I added the right one, the SATA controller.
Should I go for direct HDD passthrough instead of PCI passthrough? AFAIK PCI passthrough is the recommended approach, am I wrong?
This is how I add the PCI device to the VM (see the attached screenshot).
 

Attachments

  • Immagine 2023-05-02 154652.png
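For reference, the same passthrough setting ends up as a hostpci line in the VM's config file; with an example VM ID of 100 it can be checked like this:
Code:
grep hostpci /etc/pve/qemu-server/100.conf
# e.g.  hostpci0: 0000:00:17.0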
Your issue does not appear to be related to this solved thread. Not all SATA controllers work with passthrough, and some sit in IOMMU groups that also contain devices essential to the host. Or maybe you have host drives connected to the same SATA controller. Please search the forum for SATA controller passthrough and disk passthrough, and start a new thread if you cannot resolve the problem.
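If passing through the whole controller keeps taking the host down, passing the individual disks through by ID is a common fallback (the VM ID and disk ID below are only placeholders):
Code:
ls /dev/disk/by-id/
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL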
 