Update Error with Coral TPU Drivers

Still not working for me. No error messages - just nothing in /dev/apex*

Note that I am not fluent with Linux and I am also new to Proxmox, but from the PVE shell, if you run "lspci", do you see the Coral Edge TPU?

If so, if you run "lsmod | grep apex" what is the output?
 
That makes two of us! I've been using them both for a couple of years and every time I think I'm just about starting to understand things, something like this comes along and puts me back in my place :)

lspci does indeed show the TPUs:

Code:
08:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
09:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU

lsmod | grep apex shows nothing. This is perhaps a good time to state an assumption I've been making which, if incorrect, may be the root of my troubles: I've been assuming that, since Proxmox is passing the PCIe Coral TPU devices through, only Ubuntu needs to have the drivers loaded for them. Given that you're suggesting looking on the PVE shell, I think that means I've been toiling on the wrong machine. Have I?

EDIT: I've gone back over the install notes for doing Frigate in a VM on Proxmox and they do say to just blacklist gasket and apex on Proxmox, pass the devices through to the Ubuntu VM, and then install the drivers in Ubuntu, so I think I've done the right things. Although I have little confidence; I've obviously made a silly mistake somewhere :rolleyes:
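
For reference, my understanding is that the host-side blacklist is just a modprobe.d entry along these lines (the file name is arbitrary; this is a sketch of the approach rather than my exact file):

Code:
# /etc/modprobe.d/blacklist-coral.conf on the PVE host
blacklist gasket
blacklist apex

followed by "update-initramfs -u" and a reboot so the host never binds the devices.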
 

The instructions I posted are for getting the TPU passed into an LXC container, not through to a VM. Both have different steps and methods.

I did not need to blacklist my TPUs (not the USB one, nor the PCIe/M.2 one). The only thing I have blacklisted is the GPU, which I pass into another LXC for running Ollama.

To install Frigate in an LXC, here are some instructions...

https://www.homeautomationguy.io/blog/running-frigate-on-proxmox

The above directions will work with LXC containers, not VMs.
 
Yes, fixed the typo.

Went through the steps and it's working again. Thanks, man.

Code:
apt remove gasket-dkms
apt install dkms lsb-release sudo git dh-dkms devscripts pve-headers
git clone https://github.com/KyleGospo/gasket-dkms
cd gasket-dkms
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb
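
For anyone following along, a quick sanity check after the dpkg step (the module normally loads on its own after a reboot, so the modprobe lines are just to skip one; treat this as a sketch):

Code:
# load the modules and confirm the device node appears
modprobe gasket
modprobe apex
lsmod | grep apex
ls -l /dev/apex*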

--

Code:
2024-04-25 19:34:27.653970380 [2024-04-25 19:34:25] frigate.detectors.plugins.edgetpu_tfl INFO : Attempting to load TPU as pci
2024-04-25 19:34:27.656362076 [2024-04-25 19:34:25] frigate.detectors.plugins.edgetpu_tfl INFO : Attempting to load TPU as usb
2024-04-25 19:34:27.656426461 [2024-04-25 19:34:25] frigate.detectors.plugins.edgetpu_tfl INFO : TPU found
2024-04-25 19:34:27.656493311 [2024-04-25 19:34:27] frigate.detectors.plugins.edgetpu_tfl INFO : TPU found
Is your LXC privileged or unprivileged? This didn't work for me on kernel 6.8.4.
 
Those instructions don't have anything to do with the privilege status of the LXC container; they are run on the PVE host, because gasket-dkms, the kernel module the Coral TPU uses, has conflicts with kernel 6.8.4.
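
A quick way to confirm whether the module actually built against your running kernel (the exact output format varies between dkms versions, so this is just a sketch):

Code:
uname -r
dkms status | grep gasket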
 
Did you follow the original instructions? If so, those are no longer working. You need to use these instructions:

Code:
apt remove gasket-dkms
apt install dkms lsb-release sudo git dh-dkms devscripts pve-headers
git clone https://github.com/KyleGospo/gasket-dkms
cd gasket-dkms
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb
 
Yes, I've followed the above multiple times, but I still can't get it going.

In the LXC, running the command below returns "/dev/apex_0", which I think means the Coral can be seen by the container?
Code:
ls /dev/apex_0

Possibly something wrong with my container .conf file?
 

In your PVE shell, run "lsmod | grep apex"

You should see both apex and gasket, e.g.:

Code:
root@pve:~# lsmod | grep apex
apex 28672 5
gasket 126976 6 apex

------

Also, in your PVE shell, run "ls -l /dev/apex*"

This should output something like:

Code:
crw-rw---- 1 root root 120, 0 Apr 26 21:11 /dev/apex_0

Note the "120, 0" (major and minor device numbers). That pair gets added to your container config, i.e.:

lxc.cgroup2.devices.allow: c 120:0 rwm - See below.
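
(The major number isn't guaranteed to be 120 on every host. Assuming GNU stat, where %t/%T print the major/minor in hex, you can derive the decimal pair like this:)

Code:
# prints e.g. "c 120:0 rwm", ready to paste into the .conf
printf 'c %d:%d rwm\n' 0x$(stat -c %t /dev/apex_0) 0x$(stat -c %T /dev/apex_0)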

My container config <id>.conf is...

Code:
arch: amd64
cores: 8
features: nesting=1
hostname: docker-frigate
memory: 16384
mp0: /data/camera_media,mp=/camera_media
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=BC:24:13:67:B6:23,ip=192.168.1.111/24,type=veth
onboot: 1
ostype: debian
rootfs: vm_data:subvol-102-disk-0,size=4G
swap: 512
tags:
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 120:0 rwm
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id  dev/serial/by-id  none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0       dev/ttyUSB0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1       dev/ttyUSB1       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0       dev/ttyACM0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1       dev/ttyACM1       none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0, 0
lxc.mount.entry: /dev/bus/usb/002/ dev/bus/usb/002/ none bind,optional,create=dir 0,0
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file
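
One more thing: restart the container after editing <id>.conf so the new device entries take effect, e.g.:

Code:
pct stop <id> && pct start <id>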
 
Thank you for posting your container config. I believe I must've had something wrong in the lxc.mount.entry area that was stopping the TPU from getting recognized in Frigate. Working now!
 
I would appreciate some help here.

My TPU was working fine before updating the kernel. When I updated the kernel I got the same compilation error that many faced with the driver, so I removed the driver, updated the repo, and compiled/installed it again.

I can confirm it is loaded:

Code:
root@pve:~# lsmod | grep apex
apex 28672 0
gasket 126976 1 apex

But I don't have anything in /dev/apex*:

Code:
root@pve:~# ls -l /dev/apex*
ls: cannot access '/dev/apex*': No such file or directory

Not sure if this is relevant, but I see this in dmesg:

Code:
[ 10.234463] gasket: module verification failed: signature and/or required key missing - tainting kernel

Any ideas?
 
Which kernel version are you on? You can verify by running uname -r

Either way, try these instructions with the updated repo from Post #69 above:

Code:
apt remove gasket-dkms
apt install dkms lsb-release sudo git dh-dkms devscripts pve-headers
git clone https://github.com/KyleGospo/gasket-dkms
cd gasket-dkms
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb
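
As for the dmesg line about module verification: that's expected for a locally built DKMS module (it isn't signed) and is harmless by itself, unless Secure Boot is enabled, in which case the unsigned module can be blocked from loading. A quick check, assuming mokutil is installed:

Code:
# reports whether Secure Boot is enabled on this host
mokutil --sb-state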
 
Thanks for your reply.
Yeah, I tried those instructions before; that's how I got to the current point.

Kernel is 6.8.4-3-pve
 
It's possible that 6.8.4-3-pve is simply too new. Sorry, I won't be of much help here (I'm on 6.5.13-5).
 
Yep, downgrade from 6.8.4-3-pve.
Code:
apt install proxmox-kernel-6.5.13-5-pve-signed
apt install proxmox-headers-6.5.13-5-pve

which downgrades the kernel

Code:
proxmox-boot-tool kernel pin 6.5.13-5-pve

which pins the 6.5 kernel for boot.

then reboot.

then retry installing gasket-dkms.

If it still isn't working, you might need to uninstall the old kernels and pve-headers and retry the gasket-dkms installation.
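
A sketch of that cleanup; list what's actually installed first, since the exact package names will differ per system (the 6.8 names below are examples):

Code:
# find the installed 6.8 kernel/header packages, then remove them
dpkg -l | grep -E 'proxmox-(kernel|headers)-6.8'
apt remove proxmox-kernel-6.8.4-3-pve-signed proxmox-headers-6.8.4-3-pve
apt autoremove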
 
Just received the Coral today. I've spent about 4 hours on this so far, following various info online and from this thread, and I still can't get this working for the Coral USB variant.

Pinned to "Linux 6.5.13-5-pve" and rebooted (based on the above post). Confirmed the correct version with "uname -sr".
Next, I ran through the post https://forum.proxmox.com/threads/update-error-with-coral-tpu-drivers.136888/post-658668. Rebooted again.
Still can't get the Coral out of its Global Unichip identity. The Coral is plugged into a powered USB 3 hub.

Code:
# lsusb
Bus 002 Device 004: ID 1a6e:089a Global Unichip Corp.
Bus 002 Device 003: ID 0bda:0411 Realtek Semiconductor Corp. Hub
Bus 002 Device 002: ID 0bda:0411 Realtek Semiconductor Corp. Hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 09eb:0131 IM Networks, Inc. USB
Bus 001 Device 003: ID 058f:6254 Alcor Micro Corp. USB Hub
Bus 001 Device 004: ID 0bda:5411 Realtek Semiconductor Corp. RTS5411 Hub
Bus 001 Device 002: ID 0bda:5411 Realtek Semiconductor Corp. RTS5411 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Just to be clear, this is being run on the Proxmox host; I haven't even tried to pass it through to an LXC yet.

Anyone got some more suggestions?
 
Did you follow the instructions at https://coral.ai/docs/accelerator/get-started/#1-install-the-edge-tpu-runtime and install the drivers for the USB version? As far as I know, the USB version doesn't have any problems originating from a kernel module, so it should work after installing libedgetpu1-std.
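
From memory, the runtime install on that page boils down to something like this (verify against the linked docs, as the repo setup steps may have changed):

Code:
# add Google's Coral apt repo, then install the standard-clock runtime
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt update
apt install libedgetpu1-std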
 
