Hello.
I am trying to get hardware transcoding set up for my Plex container in Proxmox 8. The following is the guide I followed:
1. Install dkms on the PVE host via
Code:
apt-get install dkms
2. Install the latest pve-headers:
Code:
apt install pve-headers
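From what I understand, the headers have to match the kernel that is actually running for the driver build to succeed; assuming the standard Proxmox package naming, something like this should pull the matching package if the metapackage is ahead of the booted kernel:
Code:
apt install pve-headers-$(uname -r)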
3. Download and install the proprietary NVIDIA drivers from the NVIDIA website on the PVE host, then reboot. Verify the installation using
Code:
nvidia-smi
Code:
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/535.104.05/NVIDIA-Linux-x86_64-535.104.05.run
Code:
chmod 777 ./NVIDIA-Linux-x86_64-535.104.05.run
Code:
./NVIDIA-Linux-x86_64-535.104.05.run --kernel-source-path /usr/src/linux-headers-6.2.16-12-pve
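As far as I can tell, the container only gets the device nodes bind-mounted from the host, so the nvidia kernel modules have to be loaded on the host first; I assume something like this would confirm that (reportedly nvidia-uvm sometimes needs to be loaded by hand):
Code:
lsmod | grep nvidia
modprobe nvidia-uvm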
4. Stop your container. You need to modify your LXC container definition file using your favourite text editor to pass the NVIDIA /dev entries from the host to the LXC. If your LXC container ID is 100, then you edit /etc/pve/lxc/100.conf and add the following lines to the end of the file:
Code:
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
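If I understand correctly, the major numbers in the cgroup2 lines above (195, 226 and 509 here) are specific to the host, so presumably they should match what the host actually reports, i.e. the first of the two numbers shown before the date in:
Code:
ls -al /dev/nvidia* /dev/dri/*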
5. Start your container.
6. Download and install the NVIDIA drivers from their website within the LXC container. When you run the downloaded installer file, use the switch:
Code:
--no-kernel-module
Code:
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/535.104.05/NVIDIA-Linux-x86_64-535.104.05.run
Code:
chmod 777 ./NVIDIA-Linux-x86_64-535.104.05.run
Code:
./NVIDIA-Linux-x86_64-535.104.05.run --no-kernel-module
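If I am reading the guide right, the driver installed inside the container has to be exactly the same version as the kernel driver on the host (535.104.05 here); I assume the host version can be double-checked with something like:
Code:
nvidia-smi --query-gpu=driver_version --format=csv,noheader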
7. After installation, stop and start the container. Then, within the container, run 'nvidia-smi' to verify everything works correctly.
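My understanding is that nvidia-smi inside the container can only work if the bind-mounted device nodes from step 4 actually show up there; presumably something like this, run inside the container, would confirm that and show the host's kernel driver version:
Code:
ls -l /dev/nvidia* /dev/dri
cat /proc/driver/nvidia/version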
I got to the point of setting up the drivers in the LXC container and verifying with
Code:
nvidia-smi
Code:
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running
I am not sure where I went wrong.
Please see the attached images for the questions and warnings shown during installation. I selected YES for all of them; I am not sure whether that caused any errors.
https://imgur.com/eLjcGHy
https://imgur.com/GyexVIY
https://imgur.com/LurgybX
https://imgur.com/WauJzu8
https://imgur.com/x9abMxe