Update Error with Coral TPU Drivers

yes, the official guide was what i followed first - figuring it'd be the most accurate.

plus side, i've confirmed the Coral is working on a windows pc - it's able to run the parrot test file. just need to get it working on prox/linux


edit: well, i'm feeling stupid. i read bad info on the interwebs saying that if the Coral is installed correctly, the 'Global Unichip Corp' USB ID should disappear and become a 'Google ...' branded one, and that if it doesn't, something isn't right. turns out that isn't the case, and 'Global Unichip Corp' as the USB model seems to be fine.
i also assumed that not being able to run the test files meant i had something wrong.
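for anyone else checking theirs: a quick way to see what the OS currently thinks the device is (the IDs in the comments are what Coral's docs describe, to the best of my knowledge):

Bash:
# The USB Coral enumerates as "Global Unichip Corp." (ID 1a6e:089a) out of the box;
# it only re-enumerates as "Google Inc." (ID 18d1:9302) after the Edge TPU runtime
# has actually run an inference on it since it was plugged in.
lsusb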


i'm jumping the gun as i havent got any further yet, but Frigate is reporting a 10ms inference on the Coral object - looking good!

tl;dr: move along, nothing to see here!
 
Running Proxmox 8.2.2 here
I followed the instructions from this thread and reverted to kernel 6.5.13-5-pve, but still a no-show. Any idea what I've done wrong?

Code:
uname -r
6.5.13-5-pve

Steps I've done:

Code:
apt remove gasket-dkms
apt install dkms lsb-release sudo git dh-dkms devscripts pve-headers
git clone https://github.com/KyleGospo/gasket-dkms
cd gasket-dkms
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb


drivers seem to be installed

Code:
~# dkms status
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/gasket/1.0/source/dkms.conf)
Deprecated feature: REMAKE_INITRD (/var/lib/dkms/gasket/1.0/source/dkms.conf)
gasket/1.0, 6.5.13-5-pve, x86_64: installed
gasket/1.0, 6.8.4-3-pve, x86_64: installed

Code:
dmesg |grep dkms
:~#

On the host:
ls /dev/apex = nothing

Code:
:~# lspci
00:00.0 Host bridge: Intel Corporation Device a706
00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)
00:06.0 PCI bridge: Intel Corporation Raptor Lake PCIe 4.0 Graphics Port
00:06.2 PCI bridge: Intel Corporation Device a73d
00:07.0 PCI bridge: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port
00:07.2 PCI bridge: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port
00:0d.0 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 USB Controller
00:0d.2 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI
00:0d.3 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI
00:14.0 USB controller: Intel Corporation Alder Lake PCH USB 3.2 xHCI Host Controller (rev 01)
00:14.2 RAM memory: Intel Corporation Alder Lake PCH Shared SRAM (rev 01)
00:16.0 Communication controller: Intel Corporation Alder Lake PCH HECI Controller (rev 01)
00:16.3 Serial controller: Intel Corporation Alder Lake AMT SOL Redirection (rev 01)
00:1c.0 PCI bridge: Intel Corporation Alder Lake-P PCH PCIe Root Port (rev 01)
00:1c.4 PCI bridge: Intel Corporation Device 51bc (rev 01)
00:1d.0 PCI bridge: Intel Corporation Alder Lake PCI Express Root Port (rev 01)
00:1d.2 PCI bridge: Intel Corporation Device 51b2 (rev 01)
00:1d.3 PCI bridge: Intel Corporation Device 51b3 (rev 01)
00:1f.0 ISA bridge: Intel Corporation Raptor Lake LPC/eSPI Controller (rev 01)
00:1f.4 SMBus: Intel Corporation Alder Lake PCH-P SMBus Host Controller (rev 01)
00:1f.5 Serial bus controller: Intel Corporation Alder Lake-P PCH SPI Controller (rev 01)
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
02:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
57:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
58:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
59:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
5a:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-LM (rev 04)
5b:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
 
looks like you're using the kyle-gospo version of gasket-dkms, which i believe is for 6.8 and above.
on 6.5.13-5, try following the steps here.
the normal google version is what i'm using & it's working fine, albeit not surviving a reboot, so i just have to run the code again

i'm on 6.5.13-5-pve & it's working okay
 
I tried both with KyleGospo on 6.8.4-3 and the google version on 6.5.13-5, with the same result (apex device not showing up, but no visible error message).
Is it possible this is related to using dkms and secure boot?
I'm definitely not familiar with secure boot, and this is the first Proxmox installation I've done with UEFI / secure boot, so I thought this could be related. What do you think?
https://pve.proxmox.com/wiki/Secure_Boot_Setup
 
Sometimes it's named apex plus something else, so try searching for it manually:
Bash:
ls /dev/
If you think secure boot is the issue, you can always try disabling it. But if you aren't getting any error message when you boot the machine, secure boot is probably not the culprit.
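If secure boot is suspected, two quick generic checks (nothing specific to this setup):

Bash:
# Is Secure Boot actually enabled?
mokutil --sb-state
# Has the kernel rejected a module or enforced lockdown?
dmesg | grep -iE 'lockdown|rejected|gasket|apex'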
 
I have nothing under /dev related to apex.
No error messages at boot, but :

Code:
:~# modprobe gasket
modprobe: ERROR: could not insert 'gasket': Key was rejected by service
 
Well, that error means the gasket module is not signed, or is signed with an untrusted key, so Secure Boot is rejecting it. Try disabling Secure Boot in the BIOS, then uninstall and reinstall gasket-dkms if necessary.

You can also skip that step and instead try to sign the module with a trusted key, or add the current key to the trusted keys.
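For reference, manually signing an already-built module looks roughly like this (a sketch only; the key/cert paths are illustrative, substitute wherever you generated and enrolled your own MOK pair):

Bash:
# Sign the installed gasket module with your own enrolled MOK key/cert.
# If your distro compresses modules (gasket.ko.xz), decompress before signing.
KVER="$(uname -r)"
MOD="/lib/modules/$KVER/updates/dkms/gasket.ko"
/lib/modules/"$KVER"/build/scripts/sign-file sha256 \
  /var/lib/shim-signed/mok/MOK.priv /var/lib/shim-signed/mok/MOK.der "$MOD"
# Check that a signer is now present:
modinfo "$MOD" | grep -i signer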
 
Finally got it working.
So the issue was signing the dkms module. If you're in the same boat (secure boot) and want to stay that way:
* Validate that you're using secure boot:
Code:
mokutil --sb-state
Code:
SecureBoot enabled

as root :

1) generate cert:

Code:
openssl req -new -x509 -nodes -days 36500 -subj "/CN=DKMS modules" \
-newkey rsa:2048 -keyout /etc/dkms/dkms.key \
-outform DER -out /etc/dkms/dkms.der

2) import the cert
Code:
mokutil --import /etc/dkms/dkms.der

You will have to enter a password twice

3) reboot and stay at the local console; you will have a few seconds to validate the cert import in shim

Select "enroll MOK"
Select the key you just imported
Type the password
Import
reboot

4) edit the dkms framework config:
Code:
nano /etc/dkms/framework.conf

add the following:

Code:
sign_tool="/etc/dkms/sign_helper.sh"

* If sign_helper.sh is missing
Code:
nano /etc/dkms/sign_helper.sh

add the following:

Code:
#!/bin/bash
/lib/modules/"$1"/build/scripts/sign-file sha512 /etc/dkms/dkms.key /etc/dkms/dkms.der "$2"

Make it executable:
Code:
chmod +x /etc/dkms/sign_helper.sh

5) edit a framework config (the file might not exist yet)
Code:
nano /etc/dkms/framework.conf.d/01-custom.conf

add the following:

Code:
mok_signing_key="/etc/dkms/dkms.key"
mok_certificate="/etc/dkms/dkms.der"

6) follow the previous install steps:

Code:
apt remove gasket-dkms
apt install dkms lsb-release sudo git dh-dkms devscripts pve-headers
git clone https://github.com/KyleGospo/gasket-dkms
cd gasket-dkms
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb

7) reboot

8) validate that apex_0 is present and the module is loaded:
Code:
lsmod |grep gasket

Should return:

Code:
gasket                126976  1 apex

Code:
ls /dev/apex*

Should return:

Code:
/dev/apex_0
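Two optional sanity checks after the reboot (generic checks; "DKMS" matches the CN of the cert generated in step 1):

Bash:
# Confirm the cert is actually enrolled in the MOK list:
mokutil --list-enrolled | grep -i "DKMS"
# Confirm the installed module carries a signature:
modinfo gasket | grep -i '^sig'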
 
Just wanted to chime in and mention that the KyleGospo/gasket-dkms repo has no mention of "USB" in the source files that I could find; there are, however, references to "PCI." The only "USB" references that could be interpreted as a driver, which I found relating to coral, were in the source code of libedgetpu1-std and libedgetpu1-max. The Apex driver appears to be PCIe-only and will not help anyone with a USB module.

I spent a good 20 hours trying to get the USB module to work but it never showed up in /dev/. I returned it and got the M.2 version which works fine with the instructions in this thread.
 
For the usb coral, you don't need the coral driver on the host in order to pass it to an lxc. I have it working in a Frigate lxc, without installing anything on the host.
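For illustration, that kind of USB passthrough usually looks something like this in the container config (the bus number 003 is just an example taken from lsusb output; 189 is the standard USB character-device major):

Code:
# /etc/pve/lxc/<CTID>.conf -- example only, adjust the bus path to your lsusb output
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb/003 dev/bus/usb/003 none bind,optional,create=dir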
 
Via USB passthrough? I was under the impression that would work for a full VM but LXC runs the same kernel as the host which is potentially problematic. Are you running the current kernel in your setup?
 
Hi, when I run the trexx helper script it doesn't ask me about my coral (dual TPU pcie) device during the installation process. How do I set it up?

I have the latest version of proxmox. I can see both Edge TPUs in the datacenter resource mapping, so the drivers should be installed correctly?
 
I assume that you have the latest version of the kernel; if not, the previous posts also have instructions on how to identify your kernel version and which github repo you need to use instead of KyleGospo:
Bash:
apt remove gasket-dkms
apt install dkms lsb-release sudo git dh-dkms devscripts pve-headers
git clone https://github.com/KyleGospo/gasket-dkms
cd gasket-dkms
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb

Follow the previous instructions on the pve host to enable the gasket-dkms module that the pcie coral version needs to work. I don't know exactly how the trexx helper script passes through the pci device, or whether it passes it through at all, so you may have to pass it through yourself and install the drivers inside the lxc container for it to work correctly.
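If you do end up passing the pcie device into an lxc yourself, the usual pattern looks like this (the 120:0 device numbers are only an example; read the real ones from ls -l /dev/apex_0 on the host):

Code:
# on the host, find the real device numbers first:  ls -l /dev/apex_0
# then in /etc/pve/lxc/<CTID>.conf:
lxc.cgroup2.devices.allow: c 120:0 rwm
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file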
 

Hello - I have kernel 6.8.8-1-pve.

When I run apt remove gasket-dkms I get an error saying it is unable to locate the package.

After I installed proxmox I used the helper script for post-installation cleanup; I believe it added the non-subscription repos.
 
Sorry about that; those are the instructions for someone who has previously installed gasket-dkms with the previous kernel version. Just skip the apt remove gasket-dkms line.
 
The drivers are installed in the container. You just don't need them on the host, because the usb device shows up without them on the host and can be passed through.

For the pcie m.2 version, you need the drivers on the host in order to pass it through, afaik.
 
Trying to fix my kernel warning, similar to OP.

For anyone who is a noob like me:
I got this error after git clone. It's simply a directory issue; I needed to delete the existing gasket-driver directory at the clone path.
Code:
root@pve:/home# git clone https://github.com/google/gasket-driver.git
fatal: destination path 'gasket-driver' already exists and is not an empty directory.

After cd /home, I did rm -r gasket-driver.

After deleting the gasket-driver directory and completing the steps, I saw no errors. I rebooted my Proxmox LXC and frigate was not working. PANIC.

Restarted the LXC again and this time worked.

Thanks for the solution.
 
yeah

apt remove gasket-dkms
apt install git
apt install devscripts
apt install dh-dkms

In HOME

git clone https://github.com/google/gasket-driver.git
cd gasket-driver/
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb

apt update && apt upgrade

if no error => reboot !
i know this is an older post but i can't get it to work for the life of me. i followed these instructions step by step and was able to apt update/upgrade without any errors, but after rebooting i still get "no such file or directory" when running "ls /dev/apex_0". i've tried following pretty much every thread i've found about it and nothing seems to work. i'm getting real close to returning the m2 coral and just sticking with the usb. i think my machine is getting stuck using the vfio driver and not the apex, but i'm not positive and have no idea how to change that setting.
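One way to check whether vfio-pci has claimed the card instead of apex (the 5b:00.0 address is taken from the lspci output earlier in this thread; yours may differ):

Bash:
# Show which kernel driver is bound to the Coral
lspci -nnk -s 5b:00.0
# Look at "Kernel driver in use:". If it says vfio-pci instead of apex,
# check /etc/modprobe.d/*.conf for a vfio-pci "ids=" line containing the
# Coral's PCI ID, remove it, then run update-initramfs -u and reboot.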
 
Hi all, I had this working stable on 6.5 kernel and then recently updated to 6.8. Coral is working, but it created a really odd and nasty side effect. I've had to disable the coral for now:

I'm using a dual m2 coral with PCI passed through to a windows VM. When the host and VM cold start for the first time, everything is fine.
HOWEVER, if I shut down the windows VM and start it up again, CPU on the host goes to 100% and it crashes the host.

I can replicate this 100% of the time. I'm fairly certain it's the coral causing this situation because if I disable the passthrough, the problem goes away.

Wondering if anyone can reproduce this?

Thanks!
 