I am currently trying to configure Proxmox to use vGPU with LXCs and VMs simultaneously; I need the card shared between multiple VMs and containers.
I am using a Tesla P4 and have it working fine with VMs. I have found many tutorials online about passing GPUs into LXC containers, but they seem to pass through the whole GPU, and then nothing else can use it except that one container.
If I add this to my container configuration, it works inside the container, but it passes through the whole GPU:
Code:
# /dev/net/tun (char major 10, minor 200) for networking inside the container
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
# NVIDIA character device majors (195 is standard; 507 and 236 are dynamically assigned, see below)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 507:* rwm
lxc.cgroup2.devices.allow: c 236:* rw
# bind-mount the host device nodes into the container's /dev
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvram dev/nvram none bind,optional,create=file
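The dynamically assigned majors (507 and 236 above) will differ between hosts, so they are worth confirming on the Proxmox host before reusing the cgroup lines, e.g.:
Code:
ls -l /dev/nvidia*         # shows the major:minor pair for each device node
grep nvidia /proc/devices  # lists the majors the NVIDIA modules registered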
What I would like is a way to create a vGPU profile instance on something like /dev/nvidia-vgpu2 and pass that into the container instead of the whole device.
Or is there another way to allow access to the GPU from inside the LXC without passing through the entire device and locking it to one container?
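To make the first idea concrete: my understanding is that the vGPU driver exposes mediated device (mdev) types through sysfs, so an instance could in principle be created on the host roughly as sketched below. The PCI address and the nvidia-63 type name are just placeholders for whatever the P4 actually reports, and I don't know whether the resulting vfio mdev can be consumed by an LXC at all rather than only by a VM.
Code:
# list the vGPU profiles the driver exposes for this GPU (PCI address is an example)
ls /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/
# show the human-readable name of one profile
cat /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/nvidia-63/name
# create an instance of that profile by writing a fresh UUID to its create node
uuid=$(uuidgen)
echo "$uuid" > /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/nvidia-63/create
# the new instance shows up under /sys/bus/mdev/devices/
ls /sys/bus/mdev/devices/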
EDIT: nvidia-smi works inside the containers even without the passthrough config above, but any application that tries to actually use the card fails.
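Regarding the EDIT: from what I have read, nvidia-smi only needs /dev/nvidiactl and /dev/nvidia0, while CUDA applications also need /dev/nvidia-uvm and user-space libraries inside the container that match the host driver, so these are the things I would compare inside the container when nvidia-smi works but applications fail:
Code:
ls -l /dev/nvidia-uvm /dev/nvidia-uvm-tools    # UVM nodes that CUDA needs
cat /proc/driver/nvidia/version                # driver version the container sees
ldconfig -p | grep -E 'libcuda|libnvidia-ml'   # user-space NVIDIA libraries in the container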