Update Error with Coral TPU Drivers

Still not working for me. No error messages - just nothing in /dev/apex*

Note that I am not fluent with Linux and I am also new to Proxmox, but from the PVE shell, if you run "lspci", do you see the Coral Edge TPU?

If so, if you run "lsmod | grep apex" what is the output?
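
In case it helps, the exact checks I mean are these (the grep pattern is just my guess at what will match the device description):

Code:
lspci | grep -i coral
lsmod | grep apex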
 
That makes two of us! I've been using them both for a couple of years and every time I think I'm just about starting to understand things, something like this comes along and puts me back in my place :)

lspci does indeed show the TPUs:

Code:
08:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
09:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU

lsmod | grep apex shows nothing. This is perhaps a good time to state an assumption I've been making which (if incorrect) may be the root of my troubles. I've been assuming that, since Proxmox is passing the PCIe Coral TPU devices through, only Ubuntu needs to have the drivers loaded for them. Given that you're suggesting looking on the PVE shell, I think that means I've been toiling on the wrong machine. Have I?

EDIT: I've gone back over the install notes for doing Frigate in a VM on Proxmox and they do say to just blacklist gasket and apex on Proxmox, pass them through to the Ubuntu VM and then install the drivers in Ubuntu, so I think I've done the right things. Although I have little confidence - I've obviously made a silly mistake somewhere :rolleyes:
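
For the record, my understanding of that blacklist step is roughly the following, done on the Proxmox host so PVE never binds the drivers itself (the file name here is my own choice):

Code:
# on the Proxmox host: keep PVE from loading the Coral drivers
echo "blacklist gasket" >> /etc/modprobe.d/blacklist-coral.conf
echo "blacklist apex"   >> /etc/modprobe.d/blacklist-coral.conf
# rebuild the initramfs so the blacklist applies at boot
update-initramfs -u -k all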
 

The instructions I posted are for passing the TPU into an LXC container, not through to a VM. The two approaches use different steps and methods.

I did not need to blacklist my TPUs (neither the USB nor the PCIe/M.2 one). The only thing I have blacklisted is the GPU, which I pass into another LXC for running Ollama.

To install Frigate in an LXC, here are some instructions...

https://www.homeautomationguy.io/blog/running-frigate-on-proxmox

The above directions will work with LXC containers, not VMs.
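
For anyone comparing the two routes, a rough sketch of the difference (VM ID 100 and the PCI addresses are only taken from the lspci output earlier in this thread, so treat them as placeholders):

Code:
# VM route: pass the whole PCIe device through to the guest;
# the gasket/apex drivers are then installed inside the guest OS
qm set 100 --hostpci0 08:00.0
qm set 100 --hostpci1 09:00.0

# LXC route: the drivers live on the PVE host, which creates /dev/apex_0;
# the container is only given access to that device node
# (the full cgroup/mount config is shown later in this thread)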
 
Yes, fixed the typo.

Went through the steps and it's working again. Thanks man..

Code:
apt remove gasket-dkms
apt install dkms lsb-release sudo git dh-dkms devscripts pve-headers
git clone https://github.com/KyleGospo/gasket-dkms
cd gasket-dkms
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb

--

Code:
2024-04-25 19:34:27.653970380 [2024-04-25 19:34:25] frigate.detectors.plugins.edgetpu_tfl INFO : Attempting to load TPU as pci
2024-04-25 19:34:27.656362076 [2024-04-25 19:34:25] frigate.detectors.plugins.edgetpu_tfl INFO : Attempting to load TPU as usb
2024-04-25 19:34:27.656426461 [2024-04-25 19:34:25] frigate.detectors.plugins.edgetpu_tfl INFO : TPU found
2024-04-25 19:34:27.656493311 [2024-04-25 19:34:27] frigate.detectors.plugins.edgetpu_tfl INFO : TPU found
Is your LXC privileged or unprivileged? This didn't work for me on kernel 6.8.4
 
Those instructions don't have anything to do with the status of the LXC container; they are run on the PVE host, because the gasket-dkms kernel module that the Coral TPU uses has conflicts with kernel 6.8.4.
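
If you want to confirm that is what you are hitting, a quick check on the PVE host (just a suggestion, not part of the original instructions):

Code:
uname -r       # the running PVE kernel, e.g. a 6.8.4 pve build
dkms status    # shows whether the gasket module actually built against it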
 
Did you follow the original instructions? If so, those no longer work on that kernel. You need to use these instructions instead:

Code:
apt remove gasket-dkms
apt install dkms lsb-release sudo git dh-dkms devscripts pve-headers
git clone https://github.com/KyleGospo/gasket-dkms
cd gasket-dkms
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb
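
If the build and install go through cleanly, a quick check on the PVE host should show the module loaded and the device node present. Roughly, assuming a single PCIe TPU:

Code:
modprobe apex          # gasket is loaded automatically as a dependency
lsmod | grep apex
ls -l /dev/apex*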
 
Yes I've followed the above multiple times but I still can't get it going.

In the LXC, running the command below returns "/dev/apex_0", which I think means the Coral can be seen by the container?
Code:
ls /dev/apex_0

Possibly something wrong with my container .conf file?
 

In your PVE shell, run "lsmod | grep apex"

You should see both the apex and gasket modules, e.g.:

Code:
root@pve:~# lsmod | grep apex
apex 28672 5
gasket 126976 6 apex

------

Also, in your PVE shell, run "ls -l /dev/apex*"

This should output something like:

Code:
crw-rw---- 1 root root 120, 0 Apr 26 21:11 /dev/apex_0

Note the 120, 0 (the major and minor device numbers). Those will be added to your container config, i.e.:

lxc.cgroup2.devices.allow: c 120:0 rwm - See below.

My container config <id>.conf is...

Code:
arch: amd64
cores: 8
features: nesting=1
hostname: docker-frigate
memory: 16384
mp0: /data/camera_media,mp=/camera_media
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=BC:24:13:67:B6:23,ip=192.168.1.111/24,type=veth
onboot: 1
ostype: debian
rootfs: vm_data:subvol-102-disk-0,size=4G
swap: 512
tags:
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 120:0 rwm
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id  dev/serial/by-id  none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0       dev/ttyUSB0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1       dev/ttyUSB1       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0       dev/ttyACM0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1       dev/ttyACM1       none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0, 0
lxc.mount.entry: /dev/bus/usb/002/ dev/bus/usb/002/ none bind,optional,create=dir 0,0
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file
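
One more thing worth checking once /dev/apex_0 shows up inside the container: Frigate itself still has to be pointed at the PCIe TPU. A minimal sketch of that detector section (the /config/config.yml path and the detector name "coral" are just what my setup uses, so adjust for yours):

Code:
# append a PCIe EdgeTPU detector to Frigate's config (path assumed)
cat >> /config/config.yml <<'EOF'
detectors:
  coral:
    type: edgetpu
    device: pci
EOF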
 
Thank you for posting your container config. I believe I must've had something wrong in the lxc.mount.entry area that was stopping the TPU from getting recognized in Frigate. Working now!
 
