[SOLVED] LXC Container Issues (iGPU/tun passthrough and login delays)

Justify3020

New Member
Jan 1, 2024
Hi everyone!

As the title says, I am having two issues with my LXC containers.
The server runs Proxmox 8.1.3, with the Linux 6.5.11-6-pve kernel.

First issue - delay when logging into the containers.
This affects all of my LXC containers (Debian 12 standard template): there is a delay whenever I try to log in. After I enter the password in the web console or over SSH, it takes around 15-20 seconds before I get a shell prompt.
I have reinstalled the containers multiple times, but the issue shows up even on a fresh install.

Second issue - iGPU/tun device passthrough breaking on restart
I noticed this issue after restarting the Proxmox host.
Both the /dev/dri/* and /dev/net/tun devices were no longer present in their respective containers, even though they were working perfectly fine before.
After restarting the containers a few times everything went back to normal.
The configs for the two containers in question are below:

Code:
arch: amd64
cores: 2
hostname: ext-prod-tscale
memory: 1024
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=36:5C:8D:75:56:A6,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-103-disk-0,size=16G
swap: 1024
tags: prd
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
lxc.mount.entry: /dev/net/tun /dev/net/tun none bind,create=file

Code:
arch: amd64
cores: 4
hostname: int-prod-jelly
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=4A:C6:E5:08:2E:38,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: ssd-store:vm-102-disk-0,size=36G
swap: 4096
tags: prd
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/ dev/dri none bind,optional,create=dir
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

Thanks for reading this far. Any help or ideas are greatly appreciated.
 
First issue - delay when logging into the containers.
Debian containers require the nesting feature to be enabled. Did you enable nesting on the containers?
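If not, nesting can be enabled via the GUI (Options -> Features -> Nesting) or from the host shell. A quick sketch, using container ID 103 from your first config as an example:

Code:
pct set 103 -features nesting=1

The container needs to be restarted for the change to take effect.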

Second issue - iGPU/tun device passthrough breaking on restart
Starting with PVE 8.1, container device passthrough is supported, so you no longer have to manually bind-mount device nodes into the containers or add allow rules for specific major/minor device numbers.
Instead you can just add the device to the container config like so:
Code:
pct set <container id> -dev<n> /dev/net/tun
where <n> is an arbitrary number from 0 to 255 that identifies the device.
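As a concrete sketch for the two configs you posted (container IDs 103 and 102 are taken from the rootfs lines; the /dev/dri node names are inferred from your 226:0 and 226:128 allow rules, so verify them with ls -l /dev/dri on the host):

Code:
pct set 103 -dev0 /dev/net/tun
pct set 102 -dev0 /dev/net/tun
pct set 102 -dev1 /dev/dri/card0
pct set 102 -dev2 /dev/dri/renderD128

Afterwards, remove the old lxc.cgroup2.devices.allow and lxc.mount.entry lines from the configs and restart the containers.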
 
Amazing, thank you so much.

Nesting was indeed disabled and it was causing those weird delays.
Also I had no idea that you don't need manual bindings anymore.
Everything works like a charm now. Thanks again.
 
How can I pass through a physical network card (eth1) to an OpenWRT container?
Is there also no need for bind mounts, allow rules, etc.?

Thanks.
 
