M2 EdgeTPU (Coral AI) Problem

chapapa

Jan 15, 2022
Hello everyone,

Today I received the M.2 E-key Coral AI (I wanted the USB one, but given the current circumstances...). I want to use it in an LXC container with Frigate for object detection. However, I have a problem and I can't find the solution :(. Maybe someone here can help me out.

Proxmox: 7.1-8
Kernel: 5.13.19-2-pve

I can find the device
Code:
root@pve:~# lspci -nn | grep 089a
02:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]

but there is no /dev/apex_0

Code:
root@pve:~# ls -al /dev/apex*
ls: cannot access '/dev/apex*': No such file or directory

Code:
root@pve:~# ls /dev/apex_0
ls: cannot access '/dev/apex_0': No such file or directory


I tried to install it based on this guide: https://www.coral.ai/docs/m2/get-started#2a-on-linux
Still, the PCIe driver is not loaded. Can anyone point me in the right direction?
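For context, the relevant steps from that guide boil down to roughly the following, run as root on the Proxmox host. Treat this as a sketch, since the repository and package details may have changed since:

Code:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update
apt-get install gasket-dkms libedgetpu1-std    # gasket-dkms builds the apex/gasket kernel modules via DKMS
sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
groupadd apex
adduser $USER apex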

Other Outputs:

lspci -vvv
Code:
02:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU (prog-if ff)
        Subsystem: Global Unichip Corp. Coral Edge TPU
        Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 60
        IOMMU group: 10
        Region 0: Memory at e0400000 (64-bit, prefetchable) [size=16K]
        Region 2: Memory at e0300000 (64-bit, prefetchable) [size=1M]
        Capabilities: [80] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 75.000W
                DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
                        RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
                LnkCap: Port #1, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s (ok), Width x1 (ok)
                        TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt+ EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCap2: Supported Link Speeds: 2.5-5GT/s, Crosslink- Retimer- 2Retimers- DRS-
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
                         EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [d0] MSI-X: Enable- Count=128 Masked-
                Vector table: BAR=2 offset=00046800
                PBA: BAR=2 offset=00046068
        Capabilities: [e0] MSI: Enable- Count=1/32 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [f8] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D3 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
        Capabilities: [108 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [110 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                          PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                           T_CommonMode=0us LTR1.2_Threshold=0ns
                L1SubCtl2: T_PwrOn=10us
        Capabilities: [200 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 04000001 0000200f 02070000 c89a7ff4
        Kernel driver in use: vfio-pci

There should be one more line, "Kernel modules: apex", but I don't have it.

EDIT:

Some additional info.

Code:
root@pve:~# modprobe apex
modprobe: FATAL: Module apex not found in directory /lib/modules/5.13.19-2-pve

So it's not part of the kernel? Is there a way to get it in there?
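The apex module isn't shipped with the PVE kernel; it gets built by DKMS from the gasket-dkms package, which needs headers matching the running kernel. A rough sketch of what that looks like on the host (the kernel version in the comment is just the one from above):

Code:
apt install pve-headers-$(uname -r) dkms
apt install gasket-dkms
dkms status                  # should list the gasket module built for 5.13.19-2-pve
modprobe apex
ls -al /dev/apex_0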
 
@chapapa

Can you confirm the steps you did?
I installed pve-headers and dkms, then followed the Coral PCIe driver instructions, but it still doesn't work for me: ls -al /dev/apex_0 finds nothing, while lspci -nn | grep 089a returns the expected result.

Thanks,
 
@dougmaitelli, I remember that at first it didn't work for me either. I think it had something to do with the dkms package being installed before the pve-headers were updated. Try installing dkms with the "--reinstall" parameter, or uninstall it completely and install it again.
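A sketch of what that reinstall could look like, assuming the packages from the Coral guide are already present; dkms status is the quick way to see whether the module actually got built for the running kernel:

Code:
apt install --reinstall dkms gasket-dkms
dkms status
modprobe apex && ls -al /dev/apex_0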
 
Hey @chapapa,

Yeah, I tried reinstalling it, but no luck. I can see that dkms is loading the module and everything seems to be right, except that /dev/apex_0 doesn't exist.

Thanks,
 
I am having the same issue, did you ever find a solution?
So, I managed to get this working today for an M.2 E-key Coral on Proxmox 7.4-3.

I installed the pve-headers and then the dkms packages on Proxmox, then followed the instructions for installing a Coral on Linux from their website, on Proxmox itself (section 2a, instructions 1-6). The only difference was that in instruction 3 I created a user called lxcroot with value 100000 and used that in place of apex. Once I had rebooted, I created an unprivileged container with Debian 11 and added these lines to the config file:

lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0

I then booted the LXC and ran steps 2a (1, 2, 4, 5 and 6) from the same install instructions, this time inside the LXC (I don't know if this is necessary or not, probably not, but it's what I did). Then I ran through the rest of the instructions and the test in step 4 just worked.

Before I followed this order, I was getting all sorts of errors when trying to run the test (no kernel module, file errors, not being able to access the Coral device etc.). After doing the above, I get an inference speed on the Macaw test of 2.6 ms, in an unprivileged container.
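The post doesn't show the exact commands for the lxcroot part; presumably it was something along these lines, so that /dev/apex_0 ends up owned by GID 100000, which maps to root inside a default unprivileged container. The names and flags below are an assumption, not taken from the post:

Code:
groupadd --gid 100000 lxcroot
useradd --uid 100000 --gid 100000 --no-create-home --shell /usr/sbin/nologin lxcroot
# then put GROUP="lxcroot" in place of GROUP="apex" in the 65-apex.rules udev rule (assumption)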
 
Next task is to not use lxcroot and do id mapping properly within the lxc, but that's for another day.
 
Thanks for the update. I will try this out.
I've managed to get the Coral and iGPU both working properly in an unprivileged LXC with ID mapping. This is what I did, in case it helps anyone else.

On the LXC from my previous post:

Check /etc/group to get the group numbers for the video, render, and apex groups. In my case these were video 44, render 105, and apex 1000.

Add the root user to the video, render, and apex groups by running:

usermod --append --groups video,render,apex root

Shutdown the LXC


On Proxmox;

Check /dev/dri to get the video card's device (major) number; in my case this was 226.

Check /etc/group to get the group numbers for the video, render, and apex groups. In my case these were video 44, render 103, and apex 1000.

Edit /etc/subgid to add the lines below, to allow root to map these group IDs:
root:44:1
root:103:1
root:1000:1

Edit /etc/pve/lxc/<lxc id number>.conf to add lines for ID mapping. My config is below, with some notes:

arch: amd64
cores: 2
features: nesting=1
hostname: coraltest2
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=9E:AB:69:EF:09:2B,ip=dhcp,type=veth
ostype: debian
rootfs: local-zfs:subvol-99-disk-0,size=8G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:* rwm // use video card device number here
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file,mode=0666 //mount Direct Rendering Infrastructure card0
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file //mount Direct Rendering Infrastructure Render128
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file // mount Coral EdgeTPU
lxc.idmap: u 0 100000 65536 // maps UIDs 0-65535 (LXC namespace) to 100000-165535 (host namespace)
lxc.idmap: g 0 100000 44 // maps GIDs 0-43 (LXC namespace) to 100000-100043 (host namespace)
lxc.idmap: g 44 44 1 // maps GID 44 (LXC namespace) to 44 (host namespace) for video group
lxc.idmap: g 45 100045 60 // maps GIDs 45-104 (LXC namespace) to 100045-100104 (host namespace)
lxc.idmap: g 105 103 1 // maps GID 105 (LXC namespace) to 103 (host namespace) for render group
lxc.idmap: g 106 100106 894 // maps GIDs 106-999 (LXC namespace) to 100106-100999 (host namespace)
lxc.idmap: g 1000 1000 1 // maps GID 1000 (LXC namespace) to 1000 (host namespace) for apex group
lxc.idmap: g 1001 101001 64535 // maps GIDs 1001-65535 (LXC namespace) to 101001-165535 (host namespace)


Boot up the LXC and there you are: an unprivileged LXC with a working Coral M.2 and 12th-gen iGPU.
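To verify from inside the container, the passed-through devices should show up with the expected ownership, and the classification example from section 4 of the Coral guide (the "Macaw" test mentioned above) can be run; roughly:

Code:
ls -l /dev/apex_0 /dev/dri/renderD128
apt install python3-pycoral git
git clone https://github.com/google-coral/pycoral.git
cd pycoral
bash examples/install_requirements.sh classify_image.py
python3 examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input test_data/parrot.jpg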
 
@m
Hello, how did you do this part: "created a user called lxcroot with value 100000"?
 
Just a note: I was able to get the PCIe TPU working via a VM by enabling the IOMMU in GRUB from the PVE shell.

Code:
nano /etc/default/grub
For Intel, add one of the following (with or without iommu=pt):
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
Save, then:
Code:
update-grub
nano /etc/modules
Add the modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

reboot

In the VM:
install the Coral drivers (section 2a, steps 1-6),
then add your user to the plugdev group:
Code:
sudo usermod -aG plugdev $USER

I was unable to figure out how to enable the Coral PCIe device in an LXC; everything looked good, drivers and all, but for some reason Frigate kept failing with NO TPU FOUND. A VM worked for me.
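After editing GRUB and /etc/modules it can help to refresh the initramfs and confirm the IOMMU actually came up before attaching the device to the VM; a rough check on the PVE host:

Code:
update-initramfs -u -k all
reboot
dmesg | grep -e DMAR -e IOMMU
lspci -nnk -s 02:00.0        # should show "Kernel driver in use: vfio-pci" once the VM has claimed the TPU
find /sys/kernel/iommu_groups/ -type l | grep 02:00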
 
I'm hitting a wall on this one.

I've tried this on 5.15, 5.19, and 6.2. I've installed pve-headers and dkms with no issue.

When I try to install gasket-dkms I get this error:

Code:
Building for 6.2.11-2-pve
Module build for kernel 6.2.11-2-pve was skipped since the
kernel headers for this kernel does not seem to be installed.

Does the pve-headers package cover the kernel headers referred to in the error?

Edit: to anyone looking, you have to install the kernel-specific pve-headers:

apt install pve-headers-$(uname -r)
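Once matching headers are installed, DKMS can rebuild the module for the running kernel without reinstalling anything else; something like:

Code:
apt install pve-headers-$(uname -r)
dkms autoinstall
dkms status
modprobe apex && ls -al /dev/apex_0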
 
I followed your steps and installed the Coral on the host and in the LXC, but used apex on the host instead of lxcroot. Can you help me out with what I need to put into the config file?

Thanks in advance!
 
OK, I've delved into the ID mapping and set up a config file for my situation. Strangely enough, I keep getting an error.

PVE host:
video = 44
render = 104
apex = 1000

LXC:
video = 44
render = 108
apex = 1000

On the host, in /etc/subgid, I've added:
root:44:1
root:104:1
root:1000:1

This is my LXC conf file:

lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file,mode=0666
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 63
lxc.idmap: g 108 104 1
lxc.idmap: g 109 100109 891
lxc.idmap: g 1000 1000 1
lxc.idmap: g 1001 101001 64535

If I only use the lines up to lxc.idmap: g 45 100045 63, everything loads; with the rest included I get an error.

lxc_map_ids: 3701 newgidmap failed to write mapping "newgidmap: gid range [108-109) -> [104-105) not allowed": newgidmap 19090 0 100000 44 44 44 1 45 100045 63 108 104 1 109 100109 891 1000 1000 1 1001 101001 64535
lxc_spawn: 1788 Failed to set up id mapping.

Strange, because I do think I mapped everything correctly?

Some help would be appreciated!
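One quick way to double-check the group IDs on both sides before wiring up the idmap (they usually differ between host and container) is getent; for illustration:

Code:
# on the Proxmox host
getent group video render apex
# inside the LXC
getent group video render apex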
 
I'd agree that your mapping looks right. Did you add the root user to the required groups in the LXC (usermod --append --groups video,render,apex root)? Have you installed the Coral device on Proxmox and tested that it works there first (installed the pve-headers and dkms packages, followed the instructions to install the Coral device on Debian etc. within Proxmox)? Can you also check that you've got the mapping numbers the right way around (render is 104 on PVE and 108 in the LXC for you)? It's saying it doesn't like the range 108-109, although your mapping should only be using 108 -> 104, not 108-109 -> 104-105, so that's a bit odd. The only other thing I can think of: have you enabled nesting and keyctl in the LXC options and made sure it's an unprivileged container?

On a separate note, someone asked me how to find the video card number in /dev/dri/, as they only saw card0 and renderD128. If you cd to /dev/dri/ and then do ls -l, you'll see the video card number after the user and group ownership.
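For example, the output will look roughly like this; the 226 before the comma is the major device number that goes into the cgroup2 allow rule (minor numbers and dates are placeholders here):

Code:
root@pve:~# ls -l /dev/dri
crw-rw---- 1 root video  226,   0 Jan  1 00:00 card0
crw-rw---- 1 root render 226, 128 Jan  1 00:00 renderD128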
 
Thanks for the reply!

I checked everything multiple times; the only thing that was not enabled was keyctl.
So I activated it on the LXC and tried to spin up the container, but still the same error.
PS: this is on Proxmox 8.02

I did find a workaround for the mappings while searching the net; maybe it can help someone else:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file,mode=0666
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file,uid=0,gid=108
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file,uid=0,gid=44
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file,uid=0,gid=1000
lxc.hook.pre-start: sh -c "chown 0:100108 /dev/dri/renderD128"
lxc.hook.pre-start: sh -c "chown 0:100044 /dev/dri/card0"
lxc.hook.pre-start: sh -c "chown 0:101000 /dev/apex_0"
 
As an FYI, I've just upgraded to PVE 8.1 and it broke the Coral install on the hypervisor. Apparently there are no drivers for kernel 6.2 or 6.5 yet, so I'd hold off on updating for now if I were you.

Edit: Check out this post: https://forum.proxmox.com/threads/update-error-with-coral-tpu-drivers.136888/post-608975

This solution worked for me to get the Coral working again under PVE 8.1 and kernel 6.5. No need to make any changes to the LXC, just the PVE host.
 

Hi moocowmatt, does your configuration work with Frigate in an LXC?
And another question: did you also install the gasket-dkms and libedgetpu1-std packages in the LXC container?
 
