When you say unprivileged, do you mean root? I have passwordless sudo enabled for the user. Is there a feature request open for this to work with normal users already? I can submit one if not.
If you're having issues with this, either deploy a VM with Docker in it or look for an install method other than Docker. I can't help with this as I don't use Frigate; I'd take that to their forums/communities for help. Good luck!
You are a lifesaver, thank you! I forgot to do this part. It's running now.
Btw, is there any reason to worry about these other things I mentioned:
nobody:nogroup
missing nvidia-cap1 and nvidia-cap2
errors=remount-ro
I'm assuming this is docker? If so, you need to install the nvidia container toolkit in the container.
# nvidia for docker with nvidia-container-toolkit on the PVE host:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg...
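For reference, the full toolkit setup on the host looks roughly like this, following NVIDIA's published install steps (the keyring path and repo URL are the ones NVIDIA's instructions use; verify against their current docs before running):

```shell
# On the PVE host: fetch and dearmor NVIDIA's signing key
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

# Add the repository, pointing apt at the keyring above
curl -sL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit
apt update && apt install -y nvidia-container-toolkit
```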
@dasunsrule32 I don't appreciate the full quote but I briefly tested it and quite like it. Not sure why I didn't test it sooner. I'll switch over to that and document it if it works nicely. I'm just not much of a fan of manually editing the CT...
Even with the driver, privileged still isn't needed. You just have to set the card permissions on the /dev/nv* stuff in the container config. This is how we all learn though!
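As a concrete sketch, the /dev/nv* permissions in an unprivileged CT config look something like this (the device major numbers vary per host and driver version; check `ls -l /dev/nv*` on your host first and treat the numbers below as placeholders):

```
# /etc/pve/lxc/<CTID>.conf -- example entries, majors differ per host
lxc.cgroup2.devices.allow: c 195:* rwm   # nvidia0, nvidiactl (195 is typical)
lxc.cgroup2.devices.allow: c 509:* rwm   # nvidia-uvm (dynamic major, verify yours)
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```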
You got this!
Yeah, I know. I started with privileged since it was easier to do the passthrough, or at least I thought so. Now I'm stuck with those containers, each with a bunch of stuff in them. I'll try porting everything to an unprivileged one.
You don't need privileged for Docker. My config above actually will work with Docker as well. Only thing you really need to do is just map permissions to the correct mapped permissions, root for example: 100000:100000 if you're doing bind mounts...
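To illustrate the mapping: with the default /etc/subuid offset of 100000, container uid N appears on the host as 100000+N, so the bind-mount source should be owned by the shifted id (path below is just an example):

```shell
# Default unprivileged LXC idmap offset (check /etc/subuid on your host)
OFFSET=100000
CT_UID=0                        # root inside the container
HOST_UID=$((OFFSET + CT_UID))   # what that uid looks like on the host
echo "$HOST_UID"                # prints 100000
# Then chown the bind-mount source on the host, e.g.:
# chown -R "$HOST_UID:$HOST_UID" /srv/ct-data   # example path
```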
Make a backup and do not use the nvidia hook until you uninstall the manually installed driver.
Run the installer and uninstall the drivers; then you "should" be able to use the bind mount options in the config and delete the card permissions as...
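If the driver was installed with NVIDIA's .run installer, that same installer carries an uninstall mode (the version in the filename below is only an example; use whichever .run file you originally installed with):

```shell
# Re-run the original .run installer in uninstall mode (filename is an example)
sh ./NVIDIA-Linux-x86_64-550.127.05.run --uninstall

# Or, if the installer left its helper on the system:
nvidia-uninstall
```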
Nope, no need for driver installation in the container. The host's drivers get bind-mounted into the container for use; you can see the mount output I posted.
See my post further up...
This is what I'm using on pve9 for the toolkit:
cat /etc/apt/sources.list.d/nvidia-container-toolkit.sources
Types: deb
URIs: https://nvidia.github.io/libnvidia-container/stable/deb/amd64/
Suites: /
Components:
Signed-By...
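For comparison, a complete deb822 stanza would look something like this; the Signed-By path shown is the keyring location NVIDIA's install instructions use, so adjust it if yours lives elsewhere:

```
Types: deb
URIs: https://nvidia.github.io/libnvidia-container/stable/deb/amd64/
Suites: /
Components:
Signed-By: /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
```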
Your way works, and I used that as a workaround when the nvidia hook was broken, but that's an awful lot of work.
Just install the nvidia-container-toolkit and NVIDIA drivers on the host and add these lines to your container's config...
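The exact lines aren't shown in the quote above, but on recent Proxmox releases the usual approach is the devN passthrough entries, which handle the idmapping for you. A sketch, assuming the standard /dev/nv* node names (match them against your own `ls -l /dev/nv*` output):

```
# /etc/pve/lxc/<CTID>.conf -- example devN entries, adjust to your device nodes
dev0: /dev/nvidia0,uid=0,gid=0
dev1: /dev/nvidiactl,uid=0,gid=0
dev2: /dev/nvidia-uvm,uid=0,gid=0
dev3: /dev/nvidia-uvm-tools,uid=0,gid=0
```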