How to install Coral M.2 PCI passthrough for Frigate on Proxmox 8+

vendo232

Code:
apt install pve-headers


apt-get install --reinstall gasket-dkms   (this does not work on the latest Proxmox 8)


GASKET-DKMS installation in Proxmox 8


apt remove gasket-dkms
apt install git
apt install devscripts
apt install dh-dkms


In your home directory:


git clone https://github.com/google/gasket-driver.git
cd gasket-driver/
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb
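
A quick sanity check at this point (optional): confirm the DKMS module actually built and installed for the running kernel. Note the version in the .deb filename (1.0-18 here) may differ depending on when you clone the repo. dkms status and modinfo are standard tools, nothing specific to this guide:

Code:
dkms status                    # gasket should be listed as installed for your kernel
modinfo apex | grep filename   # confirms the apex module is visible to the running kernel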


apt update && apt upgrade

reboot


Coral Drivers


echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list


curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
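
Note: apt-key is deprecated on Debian 12 / Proxmox 8, so this command may print a warning. If it complains or is unavailable on your system, a keyring-based alternative (an untested sketch, adapted from Google's apt repository instructions) would be:

Code:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/coral-edgetpu.gpg
echo "deb [signed-by=/usr/share/keyrings/coral-edgetpu.gpg] https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list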


apt-get update


apt-get install gasket-dkms libedgetpu1-std


sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"


groupadd apex


adduser $USER apex




lspci -nn | grep 089a


    03:00.0 System peripheral: Device 1ac1:089a




ls /dev/apex_0
    /dev/apex_0


nano /etc/default/grub


GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"


update-grub          # run this to finalize the changes
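
Note: update-grub only applies if your host actually boots through GRUB. If your Proxmox install boots via systemd-boot (typically ZFS root on UEFI), the kernel command line lives in /etc/kernel/cmdline instead; a sketch of the equivalent steps:

Code:
nano /etc/kernel/cmdline        # append intel_iommu=on iommu=pt to the single line
proxmox-boot-tool refresh       # writes the new command line to the boot entries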


nano /etc/modules


vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd


reboot
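
After the reboot you can optionally verify that the IOMMU is active and the vfio modules are loaded before touching the VM:

Code:
dmesg | grep -e DMAR -e IOMMU   # look for lines indicating the IOMMU is enabled
lsmod | grep vfio               # the modules listed in /etc/modules should show up here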

After the reboot, add the Coral as a PCI device to your VM (in the GUI: Hardware -> Add -> PCI Device).


UPDATE: Do not update the kernel. Upgrading the kernel will cause issues and Proxmox will not boot.

If you have already upgraded the kernel, follow this guide to revert to the kernel you were running when you followed these steps:

https://engineerworkshop.com/blog/how-to-revert-a-proxmox-kernel-update/
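
If you want to avoid the problem in the first place, Proxmox can pin the currently working kernel so an apt upgrade won't switch you to a newer one. A sketch, assuming your host uses proxmox-boot-tool (the version string below is only an example, substitute your own from uname -r):

Code:
uname -r                                    # note the kernel you are currently running
proxmox-boot-tool kernel pin 6.5.13-5-pve   # example version, use yours
proxmox-boot-tool kernel list               # verify the pin took effect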
 
Hi. I seem to be having problems with my Coral TPU not being detected in my Ubuntu VM. It was working but seems to have broken after an update.

In your script above - do all those commands need to be done in Proxmox, or are some of them in the VM?
 
You need to install the Google driver in the VM as well.
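
For reference, inside a Debian/Ubuntu VM it is essentially the same Google-repo procedure as on the host, except you need the headers for the VM's own kernel. A rough sketch (assuming the stock distro kernel; package names may differ on other kernels):

Code:
apt install linux-headers-$(uname -r) dkms
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt update
apt install gasket-dkms libedgetpu1-std
reboot
# after the reboot, /dev/apex_0 should exist inside the VM
ls /dev/apex_0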
 
Could you please share the details of the VM you are passing it through to?
I have tried passing my Coral TPU PCIe to a Debian 12 VM (without installing the drivers on the Proxmox host, though) without any luck. The VM would not boot, and the Proxmox host reported the following errors in the log:

Code:
Jun 21 18:13:50 pve kernel: vfio-pci 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
Jun 21 18:13:49 pve kernel: vfio-pci 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
Jun 21 18:13:49 pve kernel: vfio-pci 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
Jun 21 18:13:49 pve kernel: vfio-pci 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
 
Another hint about installing a Google Coral TPU M.2 for Frigate object detection in a Docker container running on a Debian VM under Proxmox (and probably similar for an LXC container too). Maybe this helps; I typed these up as notes-to-self, and I am far from an expert, so please correct me as necessary. This whole saga started because the USB Coral kept getting stuck and there seems to be no solution for that.

The Proxmox host used is a fully updated 8.2.4 installation running on an Intel NUC NUC6i5SYH. I have been following instructions above and here, which are versions of the Google docs here.

The key points to remember are -
  • Configure the Proxmox host by installing the drivers & setting permissions as described above. As below, my Proxmox host shows no /dev/apex_0 device and this does not seem to matter.
  • Pass through the PCI device from the host to the VM.
  • Configure the VM, if you are using one to host the Docker container, by following these instructions to install the drivers & set permissions again
  • LXC containers are different here but still pay attention to passing through the right devices and setting permissions
  • If using a VM you must pass-through the /dev/apex_0 into the Docker container. This was the final sprinkle that got everything working for me.
Check the TPU is visible in the Proxmox host -
Code:
root@pmox02:/home#
root@pmox02:/home# lspci -vv |grep Coral
01:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU (prog-if ff)
    Subsystem: Global Unichip Corp. Coral Edge TPU
root@pmox02:/home# lspci -nn | grep 089a
01:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]

root@pmox02:/home# dmesg |grep apex
[   66.498629] apex 0000:01:00.0: enabling device (0000 -> 0002)
[   71.778691] apex 0000:01:00.0: Apex performance not throttled due to temperature

root@pmox02:/home# lsmod |grep apex
apex                   28672  0
gasket                126976  1 apex

root@pmox02:/home# ls /dev/apex*
ls: cannot access '/dev/apex*': No such file or directory

The TPU is visible in the hardware configuration for the Debian VM and can be passed through

In the Debian VM things look good -
Code:
root@frigate01:/home# lsb_release -a
No LSB modules are available.
Distributor ID:    Debian
Description:    Debian GNU/Linux 12 (bookworm)
Release:    12
Codename:    bookworm
root@frigate01:/home# uname -r
6.1.0-22-amd64
root@frigate01:/home# lspci -vv |grep Coral
00:10.0 System peripheral: Global Unichip Corp. Coral Edge TPU (prog-if ff)
    Subsystem: Global Unichip Corp. Coral Edge TPU
root@frigate01:/home# ls /dev/apex*
/dev/apex_0
root@frigate01:/home# lspci -nn | grep 089a
00:10.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]

And docker-compose.yaml -
Code:
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "5000:5000"
      - "8554:8554" # RTSP feeds
    environment:
      LIBVA_DRIVER_NAME: "i965"
      FRIGATE_CAMERA_PASSWORD: "supersecret"

    devices:
#      - /dev/bus/usb:/dev/bus/usb # for USB Coral
      - /dev/apex_0:/dev/apex_0
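
Once the stack is up, a quick way to confirm the device actually made it into the container (the container name frigate comes from the compose file above):

Code:
docker exec -it frigate ls -l /dev/apex_0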

The Frigate configuration:
Code:
detectors:
  coral1:
    type: edgetpu
    device: pci:0

Several instructions mentioned the following points which I never figured out -
  • MSI-X - from Googling, this is the stuff of nightmares, but it seems to be fine with the out-of-the-box configuration
  • Disable Secure Boot - again, this seems to be different in Proxmox 8 and I never found out how to do this for a Debian VM with SeaBIOS (see the note below).
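
My best guess on that last point, untested: Secure Boot is a UEFI feature, so with SeaBIOS there may simply be nothing to disable; it should only matter for OVMF/UEFI VMs, where unsigned DKMS modules won't load. A quick check from inside the VM, assuming you don't mind installing mokutil:

Code:
apt install mokutil
mokutil --sb-state    # reports SecureBoot enabled/disabled, or an EFI error on non-UEFI (SeaBIOS) VMs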
Hope that helps someone.
 
Hey I also asked the same question here: https://forum.proxmox.com/threads/p...-vm-with-pci-coral-tpu-passed-through.151346/

However, I'm really curious to learn why I need to install the drivers etc. at the Proxmox level.
I can see the M.2 Coral under PCI devices and can pass it through to e.g. a Debian VM as a "raw device".
Shouldn't that work, so that I only need to install the drivers in the Debian VM?
(I would test this, but there is currently an issue with dkms and a recent kernel version, so the official documentation steps fail.)
 
I am not very clear on the lingo. Is the Proxmox host the same as the Proxmox node (or whatever name one might have given to the node)?

Also, the code shared - do I run that in the node shell, or in the LXC container console?
 
Well, assuming that I had to run all the code in the node shell, I finally got it to work. The original instructions at the top of this thread don't work for a brand-new install of Proxmox 8.2.2.

These are the steps I took to get the drivers installed on a Proxmox 8.2.2 node.

First I had to square away the repositories, since I don't have a Proxmox subscription.

Go to Repositories and disable all repos that have the word "enterprise" in them; I think there are two of them. If you do have a Proxmox license you would skip this step, and there probably wouldn't be a need for the sources.list modifications either, since your repositories should already be squared away.

Code:
nano /etc/apt/sources.list.d/ceph.list

then replace what is there with:

Code:
deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription

save and exit

then

Code:
nano /etc/apt/sources.list

then replace what is there with:

Code:
deb http://ftp.debian.org/debian bookworm main contrib
deb http://ftp.debian.org/debian bookworm-updates main contrib


# Proxmox VE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription


# security updates
deb http://security.debian.org/debian-security bookworm-security main contrib

save and exit

Once repositories were squared away I ran the following commands:

Code:
apt update
apt install pve-headers
apt-get install
apt remove gasket-dkms
apt install git
apt install devscripts
apt install dh-dkms
apt install dkms

The last command, apt install dkms, is not listed in the original instructions; without installing dkms, the debuild command won't work.
So I just ran it again on a new Proxmox install, and after doing apt-get install (or another command) I did get some errors about some things not being signed, so they didn't install, but you can just keep going and it should work. It has to do with the repositories. I think on my previous install I did go in and disable all the enterprise repos, but on this new setup I didn't.

To continue I ran the following commands:

Code:
cd /home
git clone https://github.com/google/gasket-driver.git
cd gasket-driver/
debuild -us -uc -tc -b
cd ..
dpkg -i gasket-dkms_1.0-18_all.deb
apt update && apt upgrade

Something to keep in mind: it has to be "cd .." with a space - the space is important. What cd .. does is bring you back up one folder, basically back to the /home folder. After that I rebooted the Proxmox server and ran the following commands to install the Coral driver:

Code:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update
apt-get install gasket-dkms libedgetpu1-std
sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
groupadd apex
adduser $USER apex
lspci -nn | grep 089a

When I ran the last command I got:

Code:
03:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]

So far so good. I then rebooted the system again and ran:
ls /dev/apex_0
The original instructions don't mention a reboot here; before rebooting, ls /dev/apex_0 gave me "ls: cannot access '/dev/apex_0': No such file or directory", but after the reboot I ran the command again:
Code:
root@proxmox:~# ls /dev/apex_0
/dev/apex_0

Seems like I got it working!

From there I ran:

Code:
nano /etc/default/grub

and changed the GRUB_CMDLINE_LINUX_DEFAULT line to:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

Saved and exited. Then ran this command to update GRUB:

Code:
update-grub

Then we have one last file to modify

Code:
nano /etc/modules

Then added this info

Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Rebooted the system and ran ls /dev/apex_0 again just to make sure everything was still OK.

I know it is pretty much the same thing as the OP posted, but without fixing up the repositories and running apt install dkms it wouldn't work.

Final step on the Proxmox node: pass the Coral through to the LXC container. First create your LXC container if you haven't done so already, then run:

Code:
cd /etc/pve/lxc
ls
A list of your LXC containers should show up as <container number>.conf; in my case it was 100, so I ran:
Code:
nano 100.conf
and added this line (an lxc.mount.entry) at the bottom of the config:

Code:
lxc.mount.entry: /dev/apex_0          dev/apex_0       none bind,optional,create=file
save and exit.
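
Depending on the container (unprivileged ones in particular), the bind mount alone may not be enough and you may also need a device cgroup allow rule. A sketch, where the major/minor numbers are whatever ls -l shows for the device on your host (120, 0 is only an example):

Code:
# on the Proxmox host, note the two numbers in the device listing, e.g. "120, 0"
ls -l /dev/apex_0
# then add this to /etc/pve/lxc/100.conf next to the mount entry, using your numbers:
# lxc.cgroup2.devices.allow: c 120:0 rwm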

That is it. You would then follow the instructions on how to create your Frigate config, and it should work! At least it did for me!
 
Just to clarify: if you want to run it in an LXC container you'll need the driver installed on the Proxmox host (containers share the host kernel). If you want to use a VM you simply pass the device through; you DO NOT need to build and compile the driver on the Proxmox host itself.
 
Like you, I am using a Debian VM running Docker, but I cannot get apex_0 to show up inside the VM or on the Proxmox host.

From the VM
Code:
colby@mini-docker-1:/srv/FRIGATE$ lsb_release -a
No LSB modules are available.
Distributor ID:    Debian
Description:    Debian GNU/Linux 11 (bullseye)
Release:    11
Codename:    bullseye

colby@mini-docker-1:/srv/FRIGATE$ uname -r
6.1.0-0.deb11.21-amd64

colby@mini-docker-1:/srv/FRIGATE$ lspci -vv |grep Coral
00:10.0 System peripheral: Global Unichip Corp. Coral Edge TPU (prog-if ff)
    Subsystem: Global Unichip Corp. Coral Edge TPU

colby@mini-docker-1:/srv/FRIGATE$ ls /dev/apex*
ls: cannot access '/dev/apex*': No such file or directory

colby@mini-docker-1:/srv/FRIGATE$ lspci -nn | grep 089a
00:10.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]

I have the option to pass the Coral through in Proxmox.


I think you are saying it doesn't matter whether apex shows up on the Proxmox host, but I do need /dev/apex_0 in the VM - is that correct?
I'm unsure what to try next, because I have tried so many things and have gotten myself confused about what I did on the HOST versus in the VM.
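
The checks I am planning to run next inside the VM, collected here in case they help someone else (this assumes the Google driver is supposed to be installed in the VM, as suggested earlier in the thread):

Code:
lsmod | grep -E 'apex|gasket'           # are the modules actually loaded?
dkms status                             # did gasket build for the running kernel?
dmesg | grep -i apex                    # driver messages, or errors if it failed to bind
apt install linux-headers-$(uname -r)   # make sure headers match the running kernel
dkms autoinstall                        # rebuild registered DKMS modules for this kernel, then reboot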
 
