[SOLVED] PCI passthrough (bizarre)

Struggling to get PCI passthrough working for the dedicated GPU I have installed, I ended up making a few changes.

> Since I run a system with ZFS I am (sadly) forced to use systemd-boot, which I extended with iommu=pt amd_iommu=on video=efifb:off
> I downloaded the BIOS for the GPU from TechPowerUp, since obtaining it as documented did not work.
> I assigned the PCI device with all flags enabled (All Functions, Primary GPU, PCI-Express).
> I appended romfile=amdgpu.bin to the relevant vmid.conf file (sketched below).
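For reference, a minimal sketch of what the resulting hostpci entry in /etc/pve/qemu-server/<vmid>.conf would then look like; the PCI address 01:00 is a placeholder, and the ROM file is expected under /usr/share/kvm/:

hostpci0: 01:00,pcie=1,x-vga=1,romfile=amdgpu.bin

(pcie=1 and x-vga=1 correspond to the PCI-Express and Primary GPU checkboxes; omitting the function number passes all functions.)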

After these alterations I started the VM (Windows guest),

> which no longer finds the disk I used to install the OS.
> which displays the boot screen on the CONSOLE! This upsets me to a degree I cannot put into words right now (proposed fix: append ,vesafb:off to the video= cmdline option, which appears to work well for now).
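To see whether the host's efifb/vesafb console is still holding on to the passed-through GPU (which would explain the boot screen showing up on the console), the kernel log can be checked; this is my own suggestion, not something confirmed for this setup:

dmesg | grep -i -e efifb -e vesafb -e vfio-pci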

Any help with getting this setup to work is greatly appreciated.

One notable issue I encountered: once I add a secondary PCI device for passthrough, it gets assigned the same PCI ID, no matter what I try. When I check using 'dmesg | grep group' I see that all PCI devices are assigned to a unique IOMMU group, so I assume this is covered by the current boot parameters.
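Besides grepping dmesg, the IOMMU grouping can also be read straight from sysfs; a small loop along these lines lists which devices ended up in which group:

for g in /sys/kernel/iommu_groups/*; do echo "IOMMU group ${g##*/}:"; ls "$g/devices"; done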

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

[ 0.789549] pci 0000:00:00.X: AMD-Vi: IOMMU performance counters supported
[ 0.790853] pci 0000:00:00.X: AMD-Vi: Found IOMMU cap 0x..
[ 0.790856] pci 0000:00:00.X: AMD-Vi: Extended features (0xf....):
[ 0.790860] AMD-Vi: Interrupt remapping enabled
[ 0.790862] AMD-Vi: Virtual APIC enabled
[ 0.790975] AMD-Vi: Lazy IO/TLB flushing enabled
[ 0.791297] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
 
I suppose you set iommu=pt and this is just a typo.

Did you see our docs for the rest of the configuration?
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough

Oh, yes, indeed, a typo.

After much trial and error I ended up with a working configuration which is fast and stable.
I did check all the available documentation; it did not work as such for my setup at the time.
Based on the tests I performed and the experience I gained, I do not think I can leave anything out on this AMD CPU and AMD GPU machine.

When I ran OpenCL hashcat benchmarks, the results were <1% off from a bare-metal benchmark I found online. I am impressed.
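For anyone wanting to repeat such a comparison, hashcat's built-in benchmark mode inside the guest is the obvious candidate; the exact flags here are my assumption, not quoted from the original run:

hashcat -b -D 2    # run the standard benchmark set, restricted to GPU (OpenCL device type 2) devices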
 
Backtracking on my own statements.

The cmdline in use now (very stable, not sure about fast):

iommu=pt amd_iommu=on nomodeset nofb video=vesafb:off,efifb:off text
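With systemd-boot (as required here for the ZFS root), these options go into /etc/kernel/cmdline on a single line together with the root= options, and the ESPs have to be refreshed afterwards; the root= part below is just the usual Proxmox ZFS default shown as an example:

root=ZFS=rpool/ROOT/pve-1 boot=zfs iommu=pt amd_iommu=on nomodeset nofb video=vesafb:off,efifb:off text
pve-efiboot-tool refresh    # proxmox-boot-tool refresh on newer releases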

For /etc/modules I have (one module per line):

amd_iommu_v2
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
aufs
overlay
msr

For /etc/modprobe.d/passthrough.conf I have:
options vfio-pci ids=...idinfohere.... disable_vga=1 disable_idle_d3=0 nointxmask=1
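The ids= values are the vendor:device pairs of the GPU and its HDMI audio function and can be looked up with lspci; the bus addresses and IDs below are only placeholders for illustration. After changing the modprobe options, the initramfs has to be rebuilt so vfio-pci can claim the card early:

lspci -nn | grep -i -e vga -e audio
# 0b:00.0 VGA compatible controller [0300]: ... [1002:67df]
# 0b:00.1 Audio device [0403]: ... [1002:aaf0]
# -> ids=1002:67df,1002:aaf0

update-initramfs -u -k all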

For /etc/fstab:

hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=2021 0 0
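Whether the hugetlbfs mount is in place, and how many huge pages are currently reserved, can be checked like this (my own sanity check, not from the original post):

mount | grep hugetlbfs
grep Huge /proc/meminfo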

Which leaves me to investigate why the prior cmdline strings did not work well.
I know one reason: I had a kvm ... msrs ... configuration option, something which does not work at all with an AMD CPU.

If I am really honest, I have no clue why it is working now, since I tried everything before and the cmdline changes are minimal; I attribute a notable improvement to loading amd_iommu_v2. Notably, the system has now been entirely stable for over a week. I almost do not dare to start working on the cmdline again to enable hugepages etc.
 
Nice that it worked out. Maybe a kernel update in between?
 
