LXC passthrough for device not available

phoniclynx

New Member
Sep 21, 2024
I'm trying to pass through a GPU device to an LXC container, but I can't seem to find the device to pass through to the container.

lspci -nnk:
Code:
64:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Phoenix3 [1002:1900] (rev c5)
        Subsystem: Device [2014:8001]
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu
64:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt Radeon High Definition Audio Controller [1002:1640]
        Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt Radeon High Definition Audio Controller [1002:1640]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel

I have enabled vfio for a Windows machine that is currently turned off and only turned on when needed. Not that I can get that working stably.

But according to another post, I should be able to load the vfio drivers first and have the kernel drivers load afterwards, which should give me /dev/dri. But nothing appears as a device.
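
If I understand that right, something like this manual rebind should also hand the GPU back to amdgpu and create /dev/dri (a sketch, assuming 0000:64:00.0 from the lspci output above is the right address):

Code:
# release the GPU from vfio-pci, then let amdgpu claim it
echo 0000:64:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
modprobe amdgpu
echo 0000:64:00.0 > /sys/bus/pci/drivers/amdgpu/bind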

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
Code:
[    0.059593] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR0, rdevid:160
[    0.059595] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR1, rdevid:160
[    0.059597] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR2, rdevid:160
[    0.059598] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR3, rdevid:160
[    0.059599] AMD-Vi: Using global IVHD EFR:0x246577efa2054ada, EFR2:0x0
[    0.583253] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.586255] AMD-Vi: Extended features (0x246577efa2054ada, 0x0): PPR NX GT IA GA PC
[    0.586265] AMD-Vi: Interrupt remapping enabled
[    0.589601] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).

lspci -k | grep -A 3 "VGA"
Code:
pcilib: Error reading /sys/bus/pci/devices/0000:00:08.3/label: Operation not permitted
64:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Phoenix3 (rev c5)
        Subsystem: Device 2014:8001
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu


I (not knowing what I'm doing) tried to pass through "/sys/bus/pci/devices/0000:64:0000/", but it would not allow me to enter that as a value in the LXC Resources tab. What do I need to do?
 
You cannot pass through a PCIe device to a container; it has no kernel of its own and therefore does not know how to "speak" to the device. Only QEMU/KVM VMs can accept a PCIe-passthrough device and "speak" to it through the guest kernel. For LX(C) containers, you need to pass through the kernel device, not the PCIe device. How exactly depends heavily on the device. This thread talks about what you want.
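
For a GPU, that usually means bind-mounting the /dev/dri nodes into the container. A minimal sketch of the entries in /etc/pve/lxc/<ID>.conf (226:0 and 226:128 are the usual DRM major/minor numbers; check yours with ls -l /dev/dri):

Code:
# allow the DRM character devices inside the container (major 226 = drm)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
# bind-mount the host's /dev/dri into the container
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir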
 
I think you've missed my point... I no longer have anything in /dev/dri. There is simply nothing in there, so I can't pass that to the LXC.

Code:
root@proxmox:~# ls -l /dev/dri/
ls: cannot access '/dev/dri/': No such file or directory
root@proxmox:~#
 
Please help me understand, because as far as I know I don't have it blacklisted?

Code:
root@proxmox:/etc/modprobe.d# ls -l
total 14
-rw-r--r-- 1 root root 154 Aug 22 10:01 amd64-microcode-blacklist.conf
-rw-r--r-- 1 root root 172 Apr 24 05:03 pve-blacklist.conf
-rw-r--r-- 1 root root 149 Oct  5 19:34 vfio.conf
-rw-r--r-- 1 root root  35 Sep 23 19:19 zfs.conf
root@proxmox:/etc/modprobe.d# cat vfio.conf
options vfio-pci ids=1002:1900,1002:1640 disable_vga=1
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
root@proxmox:/etc/modprobe.d# cat pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE


# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb

What part am I missing?
 
vfio-pci binds the device so that no other driver can claim it. You need to change that in order to get it back on your hypervisor.
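
Concretely, a sketch based on the vfio.conf you posted: drop the GPU's ids from the options line (or comment out the softdep lines), then rebuild the initramfs and reboot:

Code:
# in /etc/modprobe.d/vfio.conf, remove 1002:1900,1002:1640 from "ids="
# and/or comment out the softdep lines, then:
update-initramfs -u -k all
reboot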

Why do you want to have it in both LX(C) containers and QEMU/KVM VMs? Why not just stick with VMs? You could install the container inside your guest and then have a working LX(C) passthrough inside of a guest. That may be the easiest way to achieve both.
 
To be fair, I used one of the Helper Scripts for Proxmox that creates an LXC container for Plex.
Okay, so maybe running a virtualized PVE with Plex inside it is a potential way to go? With this, you have "only" PCIe passthrough and do not need to jump through many hoops to switch between PCIe passthrough and LX(C) containers.
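
In that setup, the outer host only needs the usual VM passthrough entry. A sketch, assuming the nested PVE is VM <VMID> and the GPU sits at 0000:64:00 as above:

Code:
# /etc/pve/qemu-server/<VMID>.conf
# passing 0000:64:00 (no function number) includes both the VGA and audio functions;
# pcie=1 requires the q35 machine type
hostpci0: 0000:64:00,pcie=1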
 
