Hello,
Does anyone have an idea why the group on /dev/dri/renderD128 is reset after every restart, both in my container and on the host?
In my container (I do everything as a non-root user) you can see that the render group exists and that my user is in it:
getent group | grep render
render:x:993:frigate-user:
and this is the current state of /dev/dri (ls -lah /dev/dri):
drwxr-xr-x 3 root root 100 Mar 2 06:03 .
drwxr-xr-x 10 root root 660 Mar 2 06:07 ..
drwxr-xr-x 2 root root 80 Mar 2 06:07 by-path
crw-rw---- 1 root video 226, 0 Mar 2 06:07 card0
crw-rw---- 1 root _ssh 226, 128 Mar 2 06:07 renderD128
So I fix the ownership manually:
sudo chown root:render /dev/dri/renderD128
sudo chown root:video /dev/dri/card0
and afterwards it looks correct (ls -lah /dev/dri):
crw-rw---- 1 root render 226, 128 Mar 2 06:07 renderD128
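Since the chown only holds until the next reboot, one stopgap I have considered is reapplying it on the host at boot with a one-shot systemd unit. This is just a sketch: the unit name fix-dri-group.service is made up, and it assumes the host's render group is the one the container actually expects.

```ini
# /etc/systemd/system/fix-dri-group.service (hypothetical unit name)
[Unit]
Description=Restore group/mode on /dev/dri/renderD128
# Run after udev has (re)created the device nodes
After=systemd-udev-trigger.service
ConditionPathExists=/dev/dri/renderD128

[Service]
Type=oneshot
ExecStart=/usr/bin/chgrp render /dev/dri/renderD128
ExecStart=/usr/bin/chmod 0660 /dev/dri/renderD128

[Install]
WantedBy=multi-user.target
```

It would be enabled with systemctl enable fix-dri-group.service, though getting the udev rule itself to stick would obviously be cleaner.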
On my host I open the rule file:
nano /etc/udev/rules.d/99-render.rules
and add:
SUBSYSTEM=="drm", KERNEL=="renderD128", GROUP="render", MODE="0660"
udevadm control --reload-rules && udevadm trigger
After a restart, the group is back to:
crw-rw---- 1 root _ssh 226, 128 Mar 2 06:07 renderD128
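My current guess at the cause: a device node stores only a numeric GID, the udev rule resolves the name render against the host's /etc/group, and the container resolves that same number against its own /etc/group, where it can belong to a different name. A minimal sketch of that name/number split (the GID 104 and both sample entries are made up for illustration; my container's actual render group is 993, as the getent output above shows):

```shell
#!/bin/sh
# Illustration only: the same numeric GID resolves to different names
# depending on which group database you ask. Sample entries, not real files.
host_entry='render:x:104:'   # hypothetical host /etc/group line
ct_entry='_ssh:x:104:'       # hypothetical container /etc/group line

name_of() { echo "$1" | cut -d: -f1; }   # group name field
gid_of()  { echo "$1" | cut -d: -f3; }   # numeric GID field

echo "host resolves gid $(gid_of "$host_entry") to '$(name_of "$host_entry")'"
echo "container resolves gid $(gid_of "$ct_entry") to '$(name_of "$ct_entry")'"
```

If that guess is right, chown root:render on the host and GROUP="render" in the udev rule both stamp the host's render GID onto the node, and inside the container that number just happens to belong to _ssh.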
My config of this container:
arch: amd64
cores: 4
features: nesting=1
hostname: frigate
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.178.1,hwaddr=**:**:**:**:**:**,ip=192.168.178.***/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-zfs:subvol-109-disk-0,acl=1,mountoptions=discard;lazytime;noatime,replicate=0,size=20G
swap: 512
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id dev/serial/by-id none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1 dev/ttyUSB1 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1 dev/ttyACM1 none bind,optional,create=file
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file,mode=0660
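For what it's worth, instead of the raw lxc.cgroup2/lxc.mount.entry lines, I have read that newer Proxmox VE releases (8.x, if I remember correctly) can pass a device through with an explicit GID as seen from inside the container, which would sidestep the host's group name entirely. A sketch, not tested (993 is the container's render GID from the getent output above):

```
dev0: /dev/dri/renderD128,gid=993,mode=0660
```

or, if I read the docs right, the equivalent pct set 109 --dev0 /dev/dri/renderD128,gid=993,mode=0660.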
Should I just delete the container, create a new one, and simply do everything as root instead?