Help with amdgpu Passthrough for LXC

walber

New Member
Oct 23, 2023
Hello everybody.

I have a Ryzen 7 5700U mini PC and I'm trying to pass the GPU through to an LXC container running Frigate. Proxmox 8.0.9.

I followed several tutorials and tried several different things, but there was always a point where I couldn't move forward.
https://forum.proxmox.com/threads/how-to-amdgpu-on-proxmox-7.95285/
Some files didn't exist in my installation and had to be created; I don't know how much this matters:

/etc/modprobe.d/pve-blacklist.conf:
Code:
# This file contains a list of modules which are not supported by Proxmox VE

# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel
blacklist snd_hda_codec
blacklist snd_hda_core
blacklist radeon
blacklist amdgpu
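
After editing anything under /etc/modprobe.d/, the initramfs has to be rebuilt for the changes to apply at the next boot. A minimal sketch, assuming a standard Proxmox install (the grep should print nothing once the blacklist is active):

```shell
# Rebuild the initramfs so the new blacklist entries are honoured at boot
update-initramfs -u -k all
reboot
# After the reboot, verify the GPU drivers are no longer loaded
lsmod | grep -E 'amdgpu|radeon'
```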

/etc/modprobe.d/vfio.conf:
Code:
options vfio-pci ids=1002:164c,1002:1637
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
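
Whether the vfio-pci binding actually took effect can be checked per device. A sketch (the 04:00.0 address comes from the lspci output further down):

```shell
# Show which kernel driver is bound to the GPU; with a working vfio setup,
# the "Kernel driver in use" line reads vfio-pci
lspci -nnk -s 04:00.0
```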

/etc/modprobe.d/vfio-pci.conf is empty

/etc/modules:
Code:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
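
Note that on the 6.2+ kernels shipped with Proxmox 8, vfio_virqfd has been merged into the core vfio module, so that line can be dropped. Whether the remaining modules actually loaded can be checked with:

```shell
# List loaded vfio modules; vfio, vfio_iommu_type1 and vfio_pci should appear
lsmod | grep vfio
```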

Code:
root@pve:~#  find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/17/devices/0000:04:00.6
/sys/kernel/iommu_groups/7/devices/0000:00:14.3
/sys/kernel/iommu_groups/7/devices/0000:00:14.0
/sys/kernel/iommu_groups/15/devices/0000:04:00.3
/sys/kernel/iommu_groups/5/devices/0000:00:08.0
/sys/kernel/iommu_groups/13/devices/0000:04:00.1
/sys/kernel/iommu_groups/3/devices/0000:00:02.1
/sys/kernel/iommu_groups/11/devices/0000:03:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.3
/sys/kernel/iommu_groups/8/devices/0000:00:18.3
/sys/kernel/iommu_groups/8/devices/0000:00:18.1
/sys/kernel/iommu_groups/8/devices/0000:00:18.6
/sys/kernel/iommu_groups/8/devices/0000:00:18.4
/sys/kernel/iommu_groups/8/devices/0000:00:18.2
/sys/kernel/iommu_groups/8/devices/0000:00:18.0
/sys/kernel/iommu_groups/8/devices/0000:00:18.7
/sys/kernel/iommu_groups/8/devices/0000:00:18.5
/sys/kernel/iommu_groups/16/devices/0000:04:00.4
/sys/kernel/iommu_groups/6/devices/0000:00:08.1
/sys/kernel/iommu_groups/14/devices/0000:04:00.2
/sys/kernel/iommu_groups/4/devices/0000:00:02.2
/sys/kernel/iommu_groups/12/devices/0000:04:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/10/devices/0000:02:00.0
/sys/kernel/iommu_groups/0/devices/0000:00:01.0
/sys/kernel/iommu_groups/9/devices/0000:01:00.0
root@pve:~#

Code:
root@pve:~#  dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[    0.128404] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR0, rdevid:160
[    0.128406] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR1, rdevid:160
[    0.128407] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR2, rdevid:160
[    0.128408] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR3, rdevid:160
[    0.128409] AMD-Vi: Using global IVHD EFR:0x206d73ef22254ade, EFR2:0x0
[    0.402035] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.402893] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.402895] AMD-Vi: Extended features (0x206d73ef22254ade, 0x0): PPR X2APIC NX GT IA GA PC GA_vAPIC
[    0.402904] AMD-Vi: Interrupt remapping enabled
[    0.402906] AMD-Vi: X2APIC enabled
[    0.667845] AMD-Vi: Virtual APIC enabled
[    0.668540] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
root@pve:~#

Code:
root@pve:~# lspci -nn | grep -e 'AMD/ATI'
04:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Lucienne [1002:164c] (rev c1)
04:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Renoir Radeon High Definition Audio Controller [1002:1637]
root@pve:~#
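
For reference, the ids= list in vfio.conf above can be derived mechanically from this lspci output. A sketch using the pasted output as sample input (pure text processing, no hardware needed):

```shell
#!/bin/sh
# Sample lspci output (abbreviated); on a real host use: lspci -nn | grep 'AMD/ATI'
lspci_out='04:00.0 VGA compatible controller [0300]: AMD/ATI Lucienne [1002:164c] (rev c1)
04:00.1 Audio device [0403]: AMD/ATI Renoir Radeon HD Audio Controller [1002:1637]'

# Pull out the [vendor:device] pairs (4 hex digits, colon, 4 hex digits;
# class codes like [0300] have no colon and are skipped) and join with commas
ids=$(printf '%s\n' "$lspci_out" \
  | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' \
  | tr -d '[]' \
  | paste -sd, -)
echo "options vfio-pci ids=$ids"
# → options vfio-pci ids=1002:164c,1002:1637
```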

/etc/default/grub:
Code:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=2
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
GRUB_CMDLINE_LINUX=""

# If your computer has multiple operating systems installed, then you
# probably want to run os-prober. However, if your computer is a host
# for guest OSes installed via LVM or raw disk devices, running
# os-prober can cause damage to those guest OSes as it mounts
# filesystems to look for things.
#GRUB_DISABLE_OS_PROBER=false

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
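
After changing /etc/default/grub, the config has to be regenerated and the result can be verified on the running kernel. A sketch (hosts booting via systemd-boot, e.g. ZFS root installs, use /etc/kernel/cmdline plus proxmox-boot-tool refresh instead):

```shell
update-grub
reboot
# After the reboot, confirm the parameter made it onto the kernel command line
grep -o 'iommu=pt' /proc/cmdline
```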

As far as I understand, everything is correct up to this point, right?
If so, my problem starts here:
Code:
root@pve:~# ls -l /dev/dri
ls: cannot access '/dev/dri': No such file or directory
root@pve:~#

What do I need to do?

Thanks
 
You want to:
passthrough the gpu to an LXC
but what you actually did with all the steps you described is:
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
which is for VMs.
With the latter, the goal is for the PVE host not to use the device(s) at all; hence the module blacklisting and vfio-pci binding you did.
But for the former (what you want to do), the PVE host absolutely needs full access to the device(s).
See the problem? ;)

Unfortunately, I have no experience with AMD GPUs/APUs and LXC-passthrough, sorry.
 
Thanks for pointing out the stupid mistake, newbie here.
I was totally lost in the settings, thinking that part of the process for VMs and LXCs was the same.

As I was unable to revert the settings I made for the GPU, I did a new installation of Proxmox.

Now it's working.
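
For future readers: a reinstall can usually be avoided. A revert sketch, assuming the only changes were the files quoted above:

```shell
# Remove the vfio binding files added earlier
rm /etc/modprobe.d/vfio.conf /etc/modprobe.d/vfio-pci.conf
# Edit /etc/modprobe.d/pve-blacklist.conf and delete the added
# "blacklist amdgpu" / "blacklist radeon" / "blacklist snd_hda_*" lines,
# then rebuild the initramfs and reboot
update-initramfs -u -k all
reboot
# /dev/dri should exist again once amdgpu is loaded on the host
ls -l /dev/dri
```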
 
You got it working? What exactly did you do? I just picked up a 5800U and want to pass it through to an LXC as well.
 

I just now saw your message.
Go to /etc/pve/nodes/pve/lxc, open the config file for your LXC, and add these lines:

Code:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
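
The container then has to be restarted for the new config to apply; afterwards the device nodes should be visible inside. A sketch (CTID 100 is just an example):

```shell
# Restart the container so the new cgroup/mount entries take effect
pct reboot 100
# Check from the host that the render node is visible inside the container
pct exec 100 -- ls -l /dev/dri
```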

If your Proxmox is up to date, you can now also do it through the UI:
https://github.com/tteck/Proxmox/discussions/2850

I haven't tested it that way yet.
 
