mediated device passthrough to lxc container

Republicus

I wish to attach my mediated devices to lxc containers.

Since PVE uses its own tooling (pct), is there an equivalent command to query mdev devices, like lxc info --resources, that would display the information I need?

Example:
Code:
~$ lxc info --resources
GPUs:
  Card 0:
    NUMA node: 0
    Vendor: Intel Corporation (8086)
    Product: UHD Graphics 630 (Desktop) (3e92)
    PCI address: 0000:00:02.0
    Driver: i915 (5.4.0-89-generic)
    DRM:
      ID: 0
      Card: card0 (226:0)
      Control: controlD64 (226:0)
      Render: renderD128 (226:128)
    Mdev profiles:
      - i915-GVTg_V5_4 (1 available)
          low_gm_size: 128MB
          high_gm_size: 512MB
          fence: 4
          resolution: 1920x1200
          weight: 4
      - i915-GVTg_V5_8 (2 available)
          low_gm_size: 64MB
          high_gm_size: 384MB
          fence: 4
          resolution: 1024x768
          weight:

And adding the device:
lxc config device add CONTAINER i915 gpu gputype=mdev mdev=i915-GVTg_V5_8 id=0

Can this be translated to a PVE lxc config?



The examples above are from the Linux Containers forum: https://discuss.linuxcontainers.org/t/vgpu-passthrough-in-lxd-vm/14002/2
and https://discuss.linuxcontainers.org/t/vms-virgl-and-or-mdev-gpu-acceleration/12550
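
For reference, the mdev profiles that lxc info lists are read from sysfs on the host, so they can also be queried there directly. A minimal sketch, assuming the Intel iGPU at 0000:00:02.0 from the output above and a loaded GVT-g module:
Code:
~$ ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
i915-GVTg_V5_4  i915-GVTg_V5_8
~$ cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/available_instances
2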
 
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
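
The 226:128 in the cgroup rule above is the major:minor pair of the render node; it can be verified on the host before copying the rule, for example:
Code:
~$ ls -l /dev/dri/renderD128

The two numbers printed where a regular file would show its size (226, 128 for the first render node, matching the DRM section of the lxc info output earlier) are what belongs in the lxc.cgroup2.devices.allow line.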
 
Thanks, that works for a dedicated GPU. I'm specifically looking for mdev devices.
 
hi,

i think there is a misunderstanding here... 'lxc' is the cli tool for 'lxd', which can also manage vms (not only containers), and both of the linked threads are about vms, not containers

AFAIK there is no way to pass a device as-is to a container: the container shares the host kernel, so the driver must be loaded on the host, and that isn't really possible for mediated devices (they don't appear as regular pci devices to the host)
 
Hi,
I'm trying to do the same for my Frigate container. Does that mean you can't use a vGPU (mediated device) inside an LXC container, or did I misunderstand the process? Help would really be appreciated here

Thanks
 
I'm trying to do the same for my Frigate container. Does that mean you can't use a vGPU (mediated device) inside an LXC container, or did I misunderstand the process? Help would really be appreciated here
exactly, you can only give a container access to a device node (living in /dev), and since mediated devices are not actually pci devices with a driver, that does not work

it might be possible to bind mount the same dev node to multiple containers (since the kernel driver handles it all anyway), but i have not tried that yet
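
A minimal sketch of that untested idea, assuming the Intel render node from the config lines earlier and two example container IDs: add the same pair of lines to each container's file under /etc/pve/lxc/.
Code:
# /etc/pve/lxc/101.conf and /etc/pve/lxc/102.conf (example CT IDs)
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file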
 
Thanks @dcsapak, that did clarify my misunderstanding. I was not aware LXD involved VMs; I had believed LXD was an Ubuntu flavor of LXC, and my LXC experience is almost entirely from use with Proxmox.

I think this is technically possible, as loom shared in his reply above, with some exceptions (such as my use case) where the NVIDIA vGPU KVM host driver does not match the Linux guest driver version; in that case the guest driver throws a version-mismatch error.

There are more complex solutions I have read about, where vGPU driver features are merged into consumer drivers, but I will not be exploring this with any urgency.
 
I am curious, were you able to get this to work?
I've got an NVIDIA card that I've successfully set up with vGPU and shared with three VMs, but I would love to be able to share one of those mediated devices with an LXC container (running Plex) as well.
 
I am curious, were you able to get this to work?
I've got an NVIDIA card that I've successfully set up with vGPU and shared with three VMs, but I would love to be able to share one of those mediated devices with an LXC container (running Plex) as well.
as i wrote above, it's not possible to give an mdev to a container since it's not a "real" pci device. you can only configure device nodes, e.g. /dev/dri/card0, via bind mounts
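
For the Plex case specifically, the usual workaround (independent of vGPU) is exactly that device-node approach: load the NVIDIA driver on the host and bind-mount its /dev nodes into the container. A rough, untested sketch, assuming the standard device names created by the host driver; check ls -l /dev/nvidia* for the actual major numbers:
Code:
# /etc/pve/lxc/<CTID>.conf -- sketch only, adjust to what ls -l /dev/nvidia* shows
lxc.cgroup2.devices.allow: c 195:* rwm
# nvidia-uvm uses a separate, dynamically assigned major -- add another allow line for it
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file

Note that this exposes whatever the host driver exposes (the whole card), not one of the vGPU mdev instances; as explained above, the mdev itself cannot be handed to a container.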
 
