I found @LordRatner's post very useful, and I now have Jellyfin transcoding properly in an unprivileged Proxmox LXC.
But I don't think we need to complicate things so much.
The key idea is to install the same version of the Nvidia drivers on the host and in the LXC, right?
We can find the right version by...
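A sketch of how that version match might be checked, assuming the standard /proc interface exposed by a loaded Nvidia driver (the sample line below is only illustrative):

```shell
# On the host, /proc/driver/nvidia/version reports the loaded driver; a sample line:
line='NVRM version: NVIDIA UNIX x86_64 Kernel Module  535.104.05  Sat Aug 19 01:05:09 UTC 2023'
# (on a real host: line=$(grep 'NVRM version' /proc/driver/nvidia/version))

# Field 8 of that line is the version number
ver=$(echo "$line" | awk '{print $8}')
echo "$ver"

# Inside the LXC, install the SAME version without building a kernel module,
# since the container shares the host's kernel:
#   ./NVIDIA-Linux-x86_64-535.104.05.run --no-kernel-module
```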
Note that my goal is to use my Quadro P400 in an LXC, not in a VM (which I had already accomplished before).
My current problem is installing the Nvidia drivers properly on my Proxmox v7.1 host.
But I intend to update it to version 8, and maybe then I will be able to do it without the current issues.
Yes, it says:
root@pve:~# zgrep DRM_KMS_HELPER /boot/config-5.13.19-4-pve
CONFIG_DRM_KMS_HELPER=m
I did not try to run your command before the installation, but even now the module seems to be unknown:
root@pve:~# modprobe drm_kms_helper
modprobe: ERROR: could not insert 'drm_kms_helper': Unknown symbol...
Since I did not receive any more tips, I decided to try installing the Nvidia drivers with the --no-drm option:
./NVIDIA-Linux-x86_64-535.104.05.run --no-drm
With that, I got the following warning:
WARNING: The nvidia-drm module will not be installed. As a result, DRM-KMS
will not...
Just to add that I tried an older driver version (495.44) and the error is the same:
"Unable to load the kernel module 'nvidia-drm.ko'"
I found several topics in this forum (for instance here) about installing the Nvidia drivers on the Proxmox host, but none of them mentions this problem.
I tried to find out how to check this (and, if it is not enabled, how to enable it) but could not find how to do either.
Can you please explain to me how to do it?
Thanks
Hi
I believe I am still using kernel version 5.13:
root@pve:~# uname -r
5.13.19-4-pve
I also think the Proxmox kernel headers are up to date:
root@pve:~# apt install pve-headers-$(uname -r)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done...
Hi
I already removed/commented out the things you suggested.
Finally, I reran the initramfs update and rebooted Proxmox.
When I tried to install the Nvidia drivers, I got an error saying that the vfio drivers were in use.
So I also commented out the vfio entries in /etc/modules, reran the initramfs update and...
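For anyone following along, the disable-vfio step can be sketched roughly as follows (commenting the lines by hand works just as well; `sed` is shown only for illustration):

```shell
# Comment out the vfio entries so the Nvidia installer can claim the GPU
sed -i 's/^vfio/#vfio/' /etc/modules
# Rebuild the initramfs for all installed kernels, then reboot
update-initramfs -u -k all
reboot
```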
Hi, I was able to pass my Quadro P400 GPU through to a VM successfully by doing the following on my Proxmox v7.1 host:
# Add IOMMU Support
nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
update-grub
# Load VFIO modules at boot
nano /etc/modules
vfio
vfio_iommu_type1...
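After rebooting with those changes, the usual sanity check (a sketch; the exact output varies by board) is:

```shell
# Confirm the kernel actually enabled the IOMMU
dmesg | grep -i -e DMAR -e IOMMU
# On AMD, a line like "AMD-Vi: Interrupt remapping enabled" is a good sign
```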
In the motherboard manual (page 33) there is an option that says:
IOMMU
Enables or disables AMD IOMMU support. (Default: Auto)
I wonder if this can solve the problem without any slot change!
Although the problem is solved, maybe, as you wrote, changing the PCI slot of the DVB board could allow the passthrough to the VM, so here are the results of the command you suggested:
root@pve:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU...
Dear @leesteken
Many thanks for your hints, which would probably solve the problem. Meanwhile, since the VM was set to boot on start but that nevertheless took some seconds, I was able to SSH into Proxmox right after boot and check its .conf file:
boot: order=scsi0;ide2;net0
cores: 2...
Hi
After having a working Proxmox 7.1 server for more than a year, today I did something that prevents any connection to it (web and SSH) each time I reboot it. On that Proxmox host I had several VMs and also a CT to which I pass a DVB-S2 board used by Tvheadend.
Everything was working great, but I decided...
Thanks for the reply!
But I wonder if, in cases like this, Proxmox will distribute the total usage across all available CPUs instead of heavily using just some of them and leaving the others with almost nothing to do (as I wrote in my last post).
Hi
I have a Proxmox 7.1 server running on an AMD Ryzen 7 1700 CPU (8 cores/16 threads), which gives me a total of 16 CPUs to use among my current VMs. All 7 VMs have 2 vCPUs assigned, and 6 of them show CPU usage below 5%, while one VM has 25% CPU usage due to several Docker...
I thought about that, but I was afraid that after the first mount error (since the TrueNAS VM was still not running) it would not try again.
But from your reply, you are saying that Proxmox will keep trying to mount that NFS share, right?
Hi
Is it possible to mount an NFS share on a Proxmox host from a TrueNAS VM running inside the same Proxmox?
At the moment Proxmox boots, the TrueNAS VM is not running yet, so I cannot do it then using fstab, but what about mounting the NFS share a few minutes later?
For that I could start a little...
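One alternative to a custom script, sketched here with an assumed server address and assumed paths: define the share as NFS storage in Proxmox itself, since the storage layer keeps retrying activation until the server becomes reachable. In /etc/pve/storage.cfg that would look something like:

```
nfs: truenas
        server 192.168.1.50
        export /mnt/pool/share
        path /mnt/pve/truenas
        content backup,iso
```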
Thanks @Dunuin but can you be more precise?
Where do you do that restore? In the webgui or in the cli?
Because the only place I know to restore a VM or CT is by first selecting it in the web GUI and only then going to Backup > (select the backup file) > Restore. But since after a clean install...
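For reference, restores can also be done from the CLI (a sketch; the dump filenames and VMIDs below are hypothetical):

```shell
# Restore a VM backup into VMID 100
qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.zst 100
# Restore a CT backup into VMID 101
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101.tar.zst
```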