[TUTORIAL] Ubuntu LXC CT setup with device passthrough on Proxmox VE 8.4 for Frigate installation

twoace88

Beginner-friendly guide for setting up an Ubuntu LXC container (CT) with device passthrough on Proxmox VE 8.4 for a Frigate installation.
It walks through everything step by step, including passing through and enabling a Google Coral Edge TPU and OpenVINO on the Intel iGPU. The same approach can be used for generic device passthrough.
Hopefully this guide helps.

- Create and configure an LXC container with device passthrough in Proxmox VE 8.4
- Install Docker
- Install Frigate inside the container
- Set up Coral Edge TPU acceleration and OpenVINO

Tutorial Video Guide:
https://youtu.be/qN6A4SQMK10

Related Resources:
Proxmox VE Device Passthrough Guide : https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_configuration_14
Frigate Installation Docs: https://docs.frigate.video/frigate/installation
Coral TPU Installation: https://coral.ai/docs/m2/get-started/#2a-on-linux

Reference Setup:
  • Lenovo P330 Tiny Workstation (ThinkStation) with Intel Core i7-8700T CPU and 32 GB RAM
  • M.2 Accelerator with Dual Edge TPU installed on M.2 NGFF E-key WiFi Network Card to M-key SSD Adapter
  • Proxmox VE 8.4.1
  • LXC Ubuntu 24.04-standard

Important: complete the Prerequisite Steps in the Proxmox VE Host (see the section at the end of this guide) before creating the CT


LXC CT Creation
  1. Download CT Template

  2. Create CT
    1. Unprivileged container=Y , Nesting=Y
    2. Disks : rootfs — Configure Disk size as required (e.g. 32G)
    3. Disks : Add a second mount point (Frigate recording storage) — Configure disk size as required (e.g. 128G), Path=/mnt/frigate_storage
    4. Memory : Configure memory as required (e.g. 4GiB)
    5. Network : IPv4 — Configure as required (e.g. DHCP)
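    As an alternative to the GUI, the CT can also be created from the Proxmox host shell with pct. A minimal sketch is shown below; the VMID (101), template filename, and storage names (local, local-lvm) are assumptions and must be adapted to your environment.
      Bash:
      # Hypothetical example: unprivileged CT 101 with nesting, a 32G rootfs,
      # a 128G mount point for recordings, and DHCP networking.
      pct create 101 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
        --hostname frigate --unprivileged 1 --features nesting=1 \
        --cores 4 --memory 4096 \
        --rootfs local-lvm:32 \
        --mp0 local-lvm:128,mp=/mnt/frigate_storage \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp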
  3. Initial Device Passthrough Setup
    1. Get device information (group and group id) from host
      Bash:
      root@host:~# ls -al /dev/dri
      total 0
      drwxr-xr-x  3 root root        100 Apr  1 12:28 .
      drwxr-xr-x 18 root root       4800 Apr  1 23:01 ..
      drwxr-xr-x  2 root root         80 Apr  1 12:28 by-path
      crw-rw----  1 root video  226,   1 Apr  1 12:28 card1
      crw-rw----  1 root render 226, 128 Apr  1 12:28 renderD128
      root@host:~# grep video /etc/group
      video:x:44:
      root@host:~# grep render /etc/group
      render:x:104:

      For Coral Edge TPU (skip this if you don't have a Coral Edge TPU)
      Bash:
      root@host:~# ls -al /dev/apex*
      crw-rw---- 1 root apex 120, 0 Apr  1 12:28 /dev/apex_0
      root@host:~# grep apex /etc/group
      apex:x:1000:root
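      # Optional: confirm the Coral gasket/apex driver modules are loaded on the host
      # (module names as shipped by the Coral PCIe driver package)
      root@host:~# lsmod | grep -E 'gasket|apex'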

    2. Add Device Passthrough via Proxmox GUI config

      Resources — Add — Device Passthrough (leave "GID in CT" at its default for now; it will be configured later)
      • Device Path = /dev/dri/card1
      • Device Path = /dev/dri/renderD128
      • Device Path = /dev/apex_0 # skip this if you don't have a Coral Edge TPU
    3. Start Frigate CT and get device information (group id) from Frigate CT container
      Bash:
      root@frigate:~# ls -al /dev/dri
      total 0
      drwxr-xr-x 2 root root       80 Apr  1 16:38 .
      drwxr-xr-x 7 root root      520 Apr  1 16:38 ..
      crw-rw---- 1 root root 226,   1 Apr  1 16:38 card1
      crw-rw---- 1 root root 226, 128 Apr  1 16:38 renderD128
      root@frigate:~# grep video /etc/group
      video:x:44:
      root@frigate:~# grep render /etc/group
      render:x:108:
      Group ID for video is 44; group ID for render is 108.


      For Coral Edge TPU
      Bash:
      root@frigate:~# ls -al /dev/apex*
      crw-rw---- 1 root root 120, 0 Apr  1 16:38 /dev/apex_0
      root@frigate:~# grep apex /etc/group
      root@frigate:~#
      Groupid apex is not available yet.

    4. For Coral Edge TPU: add the apex group in the Frigate CT, since it does not exist yet
      Bash:
      root@frigate:~# sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
      root@frigate:~# groupadd -g 1000 apex
      root@frigate:~# adduser $USER apex
      root@frigate:~# grep apex /etc/group
      apex:x:1000:root
      root@frigate:~# cd /etc/udev/rules.d
      root@frigate:/etc/udev/rules.d# cat 65-apex.rules
      SUBSYSTEM=="apex", MODE="0660", GROUP="apex"

    5. Shutdown Frigate CT
    6. Change GID for Device Passthrough via Proxmox GUI config
      Resources — Edit — Device(s)
      • Device Path = /dev/dri/card1 , GID in CT = 44
      • Device Path = /dev/dri/renderD128 , GID in CT = 108
      • Device Path = /dev/dri/apex_0 , GID in CT = 1000
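      Equivalently, the same device entries can be set from the Proxmox host shell. A sketch, assuming the CT has ID 101 (adjust to your VMID):
      Bash:
      pct set 101 -dev0 path=/dev/dri/card1,gid=44
      pct set 101 -dev1 path=/dev/dri/renderD128,gid=108
      pct set 101 -dev2 path=/dev/apex_0,gid=1000 # skip if no Coral Edge TPU
      # The entries appear as dev0/dev1/dev2 lines in /etc/pve/lxc/101.conf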
    7. Start Frigate CT
    8. Check the group IDs of the passthrough devices in the Frigate CT. The groups should now be set accordingly (video, render, apex)
      Bash:
      root@frigate:~# ls -al /dev/dri
      total 0
      drwxr-xr-x 2 root root         80 Apr  2 00:22 .
      drwxr-xr-x 7 root root        520 Apr  2 00:22 ..
      crw-rw---- 1 root video  226,   1 Apr  2 00:22 card1
      crw-rw---- 1 root render 226, 128 Apr  2 00:22 renderD128
      root@frigate:~# ls -al /dev/apex_0
      crw-rw---- 1 root apex 120, 0 Apr  2 00:22 /dev/apex_0
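      Optionally, verify that the iGPU is usable from inside the CT. A sketch, assuming the vainfo tool (Ubuntu package vainfo) is installed in the CT:
      Bash:
      apt-get install -y vainfo
      vainfo --display drm --device /dev/dri/renderD128 # should list supported VA-API profiles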
  4. Update CT container (and reboot when required)
    Bash:
    apt-get update
    apt-get upgrade
    shutdown -r now

Docker Installation on CT LXC

  1. Uninstall all conflicting packages:
    Bash:
    for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

  2. Setup Docker’s apt repository
    Bash:
    # Add Docker's official GPG key:
    apt-get install ca-certificates curl
    install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    chmod a+r /etc/apt/keyrings/docker.asc
    
    # Add the repository to Apt sources:
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    apt-get update
    apt-get upgrade
  3. Install Docker packages
    Bash:
    apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
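    # Optionally confirm the installed versions
    docker --version
    docker compose version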

  4. Verify Docker installation by running hello-world image
    Bash:
    service docker start
    docker run hello-world

Post Installation Docker
  1. Setup to run Docker without root privileges
    Bash:
    groupadd docker
    usermod -aG docker $USER
    newgrp docker # to activate without relogin

  2. Verify that the docker command can be run
    Bash:
    docker run hello-world

  3. Configure Docker to start on boot with systemd
    Bash:
    systemctl enable docker.service
    systemctl enable containerd.service
Frigate Installation

  1. Prepare the Frigate Docker configuration directory
    Bash:
    mkdir -p /opt/frigate/config

  2. Prepare docker-compose.yml (/opt/frigate/docker-compose.yml)
    Change the group IDs in the group_add section to match those found in your CT. In this guide they are 44 (video), 108 (render) and 1000 (apex).

    YAML:
    services:
      frigate:
        container_name: frigate
        privileged: true # this may not be necessary for all setups; disable after configuration
        restart: unless-stopped
        image: ghcr.io/blakeblackshear/frigate:stable
        cap_add:
          - CAP_PERFMON
        group_add:
          - "44"   # video
          - "108"  # render
          - "1000" # apex
        shm_size: "288mb" # update for your cameras based on the calculation below
        volumes:
          - /etc/localtime:/etc/localtime:ro
          - /opt/frigate/config:/config
          - /mnt/frigate_storage:/media/frigate
          - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
            target: /tmp/cache
            tmpfs:
              size: 1000000000
        ports:
          - "8971:8971"
          - "8554:8554" # RTSP feeds
          # - "5000:5000" # Internal unauthenticated access. Expose carefully.
          # - "8555:8555/tcp" # WebRTC over tcp
          # - "8555:8555/udp" # WebRTC over udp
        devices:
          - /dev/dri/renderD128:/dev/dri/renderD128 # For intel hwaccel, needs to be updated for your hardware
          - /dev/dri/card1:/dev/dri/card1
          - /dev/apex_0:/dev/apex_0 # For PCIe Coral Edge TPU; follow the driver instructions at https://coral.ai/docs/m2/get-started/#2a-on-linux
          # - /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
          # - /dev/video11:/dev/video11 # For Raspberry Pi 4B
        environment:
          FRIGATE_RTSP_PASSWORD: "myrtsppassword"
          FRIGATE_ME_RTSP_USER: "myuser"
          FRIGATE_ME_RTSP_PASS: "mypass"
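
    The shm_size comment above refers to the per-camera calculation in the Frigate documentation. A rough sketch (formula as given in the Frigate docs at the time of writing; verify against the current docs), here for a 1280x720 detect resolution:
    Bash:
    # ~ (width * height * 1.5 * 20 + 270480) / 1048576 MB per camera, plus some headroom
    echo $(( (1280 * 720 * 3 / 2 * 20 + 270480) / 1048576 )) MB per camera   # ~26 MB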

  3. Bring up frigate docker
    Bash:
    root@frigate:~# docker compose up -d

  4. Frigate is up and listening on port 8971. Browse to https://<ip_address>:8971 to log in. Get the initial admin username and password from the Frigate log.
    Bash:
    root@frigate:~# docker logs frigate
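    # For example, to pull the generated admin credentials out of the log
    # (the exact log wording may vary between Frigate versions):
    root@frigate:~# docker logs frigate 2>&1 | grep -i password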

  5. Go to Settings - System Metrics and verify

Frigate Configuration - via the Frigate web UI (Settings - Configuration Editor) or by directly editing /opt/frigate/config/config.yml
  1. Example Config for Adding Detector Coral Edge TPU
    YAML:
    ##### Detector tpu #####
    detectors:
      coral1:
        type: edgetpu
        device: pci

  2. Example Config for Adding Detector OpenVino
    YAML:
    ##### Detector OpenVino #####
    detectors:
      ov:
        type: openvino
        device: GPU
    
    model:
      width: 300
      height: 300
      input_tensor: nhwc
      input_pixel_format: bgr
      path: /openvino-model/ssdlite_mobilenet_v2.xml
      labelmap_path: /openvino-model/coco_91cl_bkgr.txt

  3. Example Config for Adding Detector CPU
    YAML:
    ##### Detector cpu #####
    detectors:
      cpu1:
        type: cpu
        num_threads: 3
        model:
          path: "/custom_model.tflite"
      cpu2:
        type: cpu
        num_threads: 3

Prerequisite Steps in Proxmox VE Host
  1. IMPORTANT: Disable Secure Boot in the Proxmox host BIOS

  2. Enable Device Passthrough
    Add option to /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    Run update-grub and reboot
    Bash:
    update-grub

    Install the following packages:
    Bash:
    apt-get install intel-gpu-tools intel-media-va-driver i965-va-driver

    Add modules to /etc/modules (on Proxmox VE 8 kernels, vfio_virqfd is built into vfio and may be omitted):
    Code:
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd

    Rebuild the initramfs so the modules are loaded at boot
    Bash:
    update-initramfs -u -k all

    Reboot and verify
    Bash:
    cat /proc/cmdline
    lsmod | grep vfio
    dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

  3. Enable Intel GPU Monitoring in CT
    Update kernel parameter (kernel.perf_event_paranoid) to 0.

    Check current parameter
    Bash:
    root@host:~# sysctl kernel.perf_event_paranoid
    kernel.perf_event_paranoid = 4

    Temporary change (applies until reboot). Restart Frigate and check the result under Settings - System Metrics
    Bash:
    root@host:~# sh -c 'echo 0 > /proc/sys/kernel/perf_event_paranoid'

    Make the change permanent across reboots
    Bash:
    root@host:~# sh -c 'echo kernel.perf_event_paranoid=0 >> /etc/sysctl.d/local.conf'
    root@host:~# cat /etc/sysctl.d/local.conf
    kernel.perf_event_paranoid=0
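    # Alternatively, apply the setting immediately without a reboot:
    root@host:~# sysctl --system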

    Reboot

  4. Enable Coral Edge TPU
    https://coral.ai/docs/m2/get-started/#2a-on-linux
    or refer to the following guide: Installation Coral Dual Edge TPU runtime on Linux Proxmox VE
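    After the driver is installed on the host, a quick sanity check (a sketch; 1ac1 is the PCI vendor ID commonly reported for the Coral PCIe module, and naming may vary):
    Bash:
    lspci -nn | grep -i '1ac1\|coral' # should list the Coral Edge TPU
    ls -l /dev/apex_0                 # device node should exist, group 'apex'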
[Updated] Added prerequisite: the Coral Edge TPU must be properly installed on the Proxmox VE host before it can be passed through to the LXC CT. A separate reference guide has been added.
 

Thank you, I used this guide, but somehow it didn't work for me. I want to contribute back the solution that got it working. I used Amazon Q CLI to fix it and had it provide a summary of all the steps:

Solving Coral TPU Access Issues in Nested Containers (Docker inside LXC)

The Problem

I recently encountered a frustrating issue with my Google Coral Edge TPU (PCI-e M.2) when trying to use it with Frigate NVR running in a Docker container, which itself was running inside an LXC container. While the TPU was properly detected on the host system, the containerized Frigate application couldn't access it due to permission issues.

The error in Frigate logs was consistently:
ValueError: Failed to load delegate from libedgetpu.so.1.0

And when testing direct device access:
Permission denied
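
(For reference, a check along these lines reproduces that error; this is a hypothetical equivalent, not the exact command used:)

docker exec frigate python3 -c "open('/dev/apex_0','rb')"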

Environment Details

  • Hardware: Google Coral Edge TPU (PCI-e M.2 form factor)
  • Host OS: Linux system running LXC containers
  • Container Setup: LXC container running Docker
  • Application: Frigate NVR (ghcr.io/blakeblackshear/frigate:stable)
  • Device Path: /dev/apex_0 with initial permissions crw-rw----

The Solution

With help from Amazon Q, I was able to diagnose and fix the issue through a multi-layered approach. Here's the complete solution for anyone facing similar issues:

1. Host-Level Fixes

First, create a script on the host to fix permissions and configure the LXC container:

#!/bin/bash

# Create apex group if it doesn't exist
getent group apex || groupadd apex

# Set proper permissions for the TPU device
chmod 666 /dev/apex_0
chown root:apex /dev/apex_0

# Add the device to the LXC container with proper permissions
# The 'dev' flag is crucial here!
# Note: on Proxmox, /var/lib/lxc/101/config is regenerated from /etc/pve/lxc/101.conf
# when the CT starts, so mirror the entry there if the change does not persist.
sed -i 's|lxc.mount.entry = /dev/apex_0 dev/apex_0 none bind,optional,create=file.*|lxc.mount.entry = /dev/apex_0 dev/apex_0 none bind,optional,create=file,dev 0 0|g' /var/lib/lxc/101/config

# Create udev rules for persistent permissions
cat > /etc/udev/rules.d/65-coral-tpu.rules << 'EOF'
# Google Coral Edge TPU rules
SUBSYSTEM=="apex", GROUP="apex", MODE="0666"
KERNEL=="apex_*", GROUP="apex", MODE="0666", TAG+="uaccess"
KERNEL=="gasket*", GROUP="apex", MODE="0666", TAG+="uaccess"
EOF

# Reload udev rules
udevadm control --reload-rules
udevadm trigger

2. LXC Container Configuration

Restart the LXC container after modifying its configuration:

lxc-stop -n 101
lxc-start -n 101

3. Inside the LXC Container

Create udev rules inside the LXC container:

mkdir -p /etc/udev/rules.d/
cat > /etc/udev/rules.d/65-coral-tpu.rules << 'EOF'
# Google Coral Edge TPU rules
SUBSYSTEM=="apex", GROUP="apex", MODE="0666"
KERNEL=="apex_*", GROUP="apex", MODE="0666", TAG+="uaccess"
KERNEL=="gasket*", GROUP="apex", MODE="0666", TAG+="uaccess"
EOF

4. Docker Run Script

Create a script to run Frigate with the correct parameters:

#!/bin/bash

# Stop and remove existing container
docker stop frigate 2>/dev/null
docker rm frigate 2>/dev/null

# Run Frigate with proper permissions
docker run -d --name frigate \
--privileged \
--network host \
--device /dev/apex_0:/dev/apex_0 \
--device-cgroup-rule="c 120:* rmw" \
-v /etc/localtime:/etc/localtime:ro \
-v /root/frigate:/config \
-v /mnt/cctv:/media/frigate \
-v /tmp/cache:/tmp/cache \
-e FRIGATE_RTSP_PASSWORD="password" \
-e LIBEDGETPU_LOGGING=1 \
-e LIBEDGETPU_VERBOSITY=10 \
-e CORAL_VISIBLE_DEVICES=0 \
ghcr.io/blakeblackshear/frigate:stable

echo "Frigate started with TPU access"

5. Systemd Service for Automatic Start

Create a systemd service inside the LXC container:

cat > /etc/systemd/system/frigate.service << 'EOF'
[Unit]
Description=Frigate with Coral TPU
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/root/run-frigate.sh
ExecStop=/usr/bin/docker stop frigate

[Install]
WantedBy=multi-user.target
EOF

# Enable the service
systemctl enable frigate.service
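
# Optionally start it right away and confirm it is active:
systemctl start frigate.service
systemctl status frigate.service --no-pager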

Key Insights

  1. Device Permissions: The TPU device needs very permissive permissions (666) to work properly in nested containers
  2. LXC Configuration: The dev flag in the LXC mount entry is crucial for allowing permission changes
  3. Docker Parameters: Using both --privileged and --device-cgroup-rule="c 120:* rmw" ensures proper device access
  4. Persistence: Udev rules and systemd services ensure the configuration survives reboots

Verification

After applying these fixes, Frigate logs should show:
[timestamp] frigate.detectors.plugins.edgetpu_tfl INFO : TPU found

Conclusion

This solution addresses the complex permission issues that arise when using hardware devices in nested container environments. The fix is permanent and will persist across reboots of both the host and containers.

Special thanks to Amazon Q for helping diagnose and solve this multi-layered container permission issue. The AI assistant was instrumental in identifying the root cause and developing a comprehensive solution that works across the host, LXC container, and Docker container layers.

Hope this helps others who encounter similar issues with Coral TPUs in complex container setups!
 
Thanks for the feedback.

The guide didn't include installation of the Coral Edge TPU on the Proxmox VE host. I have added a separate reference guide as a prerequisite.
 

Permission 660 should also work, instead of 666.
Glad it's working. Greetings!