Help properly mapping drive for unprivileged container

Rspin

New Member
Feb 18, 2023
Homelab enthusiast trying to learn Proxmox and LXC. Thought I was doing pretty well but hit a snag.

I'm trying to get my GPU working for Emby in my unprivileged Ubuntu LXC container. I found posts that showed me how to map my data drives and make the GPU drivers available in the container. As noted in other posts, I am getting an error in Emby indicating that it cannot read /dev/dri/renderD128, which makes sense since, in the container, ownership of that file is nobody:nobody. The fix suggested in other posts is to map the render group on the host (108) to the emby group in the container (999). Note that the ownership of my /mnt/pve/Data directory is 1004:1004, which has no corresponding user or group on the host; it is mapped to the media user (1100) in the container, and that part works fine.

Here is the container config:
Code:
arch: amd64
cores: 4
features: nesting=1
hostname: TV
memory: 4096
mp0: /mnt/pve/Data,mp=/mnt/Data
net0: name=eth0,bridge=vmbr2,firewall=1,hwaddr=B2:0B:7E:B1:E2:A6,ip=dhcp,tag=20,type=>
ostype: ubuntu
rootfs: local-lvm:vm-200-disk-0,size=120G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 195:0 rw
lxc.cgroup2.devices.allow: c 195:255 rw
lxc.cgroup2.devices.allow: c 195:254 rw
lxc.cgroup2.devices.allow: c 508:0 rw
lxc.cgroup2.devices.allow: c 508:1 rw
lxc.cgroup2.devices.allow: c 10:144 rw
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create>
lxc.mount.entry: /dev/nvram dev/nvram none bind,optional,create=file
lxc.idmap: u 0 100000 1100
lxc.idmap: g 0 100000 999
lxc.idmap: g 999 108 1
#lxc.idmap: g 1000 1099 100
lxc.idmap: u 1100 1004 1
lxc.idmap: g 1100 1004 1
lxc.idmap: u 1101 101101 64430
lxc.idmap: g 1101 101101 64430
lxc.idmap: g 65534 165534 1
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/renderD128 none bind,optional,create=file
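
In case it helps, my understanding is that each lxc.idmap line reads as <u|g> <first container ID> <first host ID> <count>, so the group lines that matter here are meant to do the following (annotated copy, not meant to be pasted as-is):
Code:
# lxc.idmap: <u|g> <first container ID> <first host ID> <count>
# container gids 0-998 -> host gids 100000-100998
lxc.idmap: g 0 100000 999
# container gid 999 (emby) -> host gid 108 (render group on the host)
lxc.idmap: g 999 108 1
# container gid 1100 (media) -> host gid 1004 (owner of /mnt/pve/Data)
lxc.idmap: g 1100 1004 1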

When I first tried this config, the container would not start and I got this error:
Code:
lxc_map_ids: 3701 newgidmap failed to write mapping "newgidmap: gid range [1000-1100) -> [1099-1199) not allowed": newgidmap 1227442 0 100000 999 999 108 1 1000 1099 100 1100 1004 1 1101 101101 64430 65534 165534 1
lxc_spawn: 1788 Failed to set up id mapping.
__lxc_start: 2107 Failed to spawn container "200"

As you can see above, I commented out lxc.idmap: g 1000 1099 100, and the container at least starts and the drive mapping still works, but /dev/dri/renderD128 remains owned by nobody:nobody, so I am apparently still doing something wrong.
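
For reference, from inside the container the device currently shows up roughly like this (65534 is the kernel's overflow id, i.e. nobody/nogroup, which is what an unmapped host id looks like):
Code:
ls -ln /dev/dri/renderD128
crw-rw---- 1 65534 65534 226, 128 Feb 18 09:00 /dev/dri/renderD128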

Hoping a second set of eyes or better yet someone smarter than me (pretty low bar) can help.

Thanks in advance.
 
Hi,

lxc_map_ids: 3701 newgidmap failed to write mapping "newgidmap: gid range [1000-1100) -> [1099-1199) not allowed": newgidmap 1227442 0 100000 999 999 108 1 1000 1099 100 1100 1004 1 1101 101101 64430 65534 165534 1
This error occurs because
Code:
lxc.idmap: g 1000 1099 100
attempts to map container gids 1000-1099 (inclusive) to host gids 1099-1198 (inclusive). This mapping is not allowed according to the subgid file, hence the error.
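
If the intent was to keep container gids 1000-1099 inside the container's normal allocation on the host, the line would presumably need to point into the 100000+ range instead, e.g. (sketch only, assuming the stock root:100000:65536 entry in /etc/subgid):
Code:
# container gids 1000-1099 -> host gids 101000-101099,
# which lie inside root's default subgid allocation (root:100000:65536)
lxc.idmap: g 1000 101000 100
Mapping them onto host gids 1099-1198 as written would only be permitted with a matching /etc/subgid entry such as root:1099:100.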

Regarding /dev/dri appearing as nobody:nobody inside the container: are you sure that the gid of the render group on the host is 108? On my machine it is 103. You can check the numeric gid of the owner of the /dev/dri nodes using the -n flag of ls:
Code:
ls -nl /dev/dri
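On my host the output looks roughly like this; the third and fourth columns are the numeric uid and gid of each node, and the gids will differ between systems:
Code:
total 0
drwxr-xr-x 2 0   0        80 Feb 18 09:00 by-path
crw-rw---- 1 0  44 226,   0 Feb 18 09:00 card0
crw-rw---- 1 0 103 226, 128 Feb 18 09:00 renderD128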
 
Ughhhh. First, thank you for responding. I could swear I double-checked for the correct gid, but I guess not. You are correct. Changing it to 103 changed the ownership in the container to nobody:emby, which was what I was trying to accomplish. I guess the good news is that I seem to be getting the hang of this mapping, as I pretty much had that part correct after all.

Sadly, this did not fix my underlying problem, as I am still getting the error "Message": "Failed to initialize VA /dev/dri/renderD128. Error -1" in my Emby hardware detection log.

Back to the Emby forums to see if I can figure that one out.

Edit: I figured out where I got the 108 from. That is the gid of the render group inside the container, not on the host.
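For example, checking the render group on the Proxmox host versus inside the container gives different gids (output trimmed; 103 and 108 are the values on my setup):
Code:
# on the Proxmox host
getent group render
render:x:103:

# inside the container
getent group render
render:x:108: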

Thank you again. Please mark as solved.
 
