Swapping my passthrough target from VM to LXC

Helio Mendonça

Hi, I was able to pass through my Quadro P400 GPU to a VM successfully by doing the following on my Proxmox v7.1 host:

Code:
# Add IOMMU Support

nano /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

update-grub

# Load VFIO modules at boot

nano /etc/modules

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# Edit several files

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf

# Configure GPU for PCIe Passthrough

lspci -v

0c:00.0 VGA compatible controller: NVIDIA Corporation GP107GL [Quadro P400] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: NVIDIA Corporation GP107GL [Quadro P400]
        Flags: fast devsel, IRQ 5, IOMMU group 15
        Memory at f4000000 (32-bit, non-prefetchable) [disabled] [size=16M]
        Memory at d0000000 (64-bit, prefetchable) [disabled] [size=256M]
        Memory at e0000000 (64-bit, prefetchable) [disabled] [size=32M]
        I/O ports at e000 [disabled] [size=128]
        Expansion ROM at f5000000 [disabled] [size=512K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Legacy Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [250] Latency Tolerance Reporting
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [420] Advanced Error Reporting
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] Secondary PCI Express
        Kernel modules: nvidiafb, nouveau

0c:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
        Subsystem: NVIDIA Corporation GP107GL High Definition Audio Controller
        Flags: bus master, fast devsel, latency 0, IRQ 94, IOMMU group 15
        Memory at f5080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel

lspci -n -s 0c:00

0c:00.0 0300: 10de:1cb3 (rev a1)
0c:00.1 0403: 10de:0fb9 (rev a1)

echo "options vfio-pci ids=10de:1cb3,10de:0fb9 disable_vga=1"> /etc/modprobe.d/vfio.conf

update-initramfs -u

reboot

Now I want to use an LXC container instead of a VM, and for that I found this link: https://theorangeone.net/posts/lxc-nvidia-gpu-passthrough/

But I presume that this explanation is for a "clean" Proxmox host, and mine already has the changes I described above.
So my question is: which of the above steps must I undo?

For instance, should I remove the GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on" line from GRUB?
What about the contents of the /etc/modules file?
I guess that the "blacklist nvidia" line should now be removed from /etc/modprobe.d/blacklist.conf, right?

Any tips are welcome.
Regards
Hélio
 
Caveat: I have zero experience with PCI pass-through on Proxmox so far.

But I've plenty of experience with oVirt, Xen, vSphere and plain KVM, so....

Pass-through only happens when there is a hypervisor involved. With Proxmox that's KVM.
And it involves quite a few hardware configuration thingies going on to make it happen; effectively you have to
a) remove a device from host control
b) fiddle with registers so the device can be mapped and (safely) managed by a VM
c) make the device visible to a VM

and in your case, the reverse needs to cover all those steps, too.

None of that is required for containers (without VMs involved), where again my main experience is with OpenVZ, not LXC: basically a container is just a set of ordinary processes with some kernel-induced delusions about resources.

So something like a "fluid real-time remapping" of devices between containers and VMs means dealing with a level of hardware configuration complexity most OSes reserve for booting; they never want to go through that again: hot-plugging is a lot more popular with users than with operating systems!

Now containers have picked up some kernel isolation features from VMs; namespaces and control groups allow for a lot of middle ground that nobody properly designed top-down into Unix and its descendants: current efforts are all about retrofitting new abstractions without breaking too much.

But, yeah, moving devices between VMs and the host (and from there to a container) is essentially rewiring the hardware and at the very least requires rebooting the host.

In an ideal world that could be done easily and without rebooting, but so much idealism doesn't even work with real-world software IMHO.

But more concretely, after re-reading your actual question (sorry!):

Enabling IOMMU capabilities doesn't do any harm (apart from triggering potential BIOS and driver bugs), so it's generally safe to leave that.

But you want the host kernel to manage the GPU for the LXC containers, and you need to let it do that by not blacklisting the driver: AFAIK there is no way for containers to get exclusive control over devices or to load or run device drivers: control groups and user ID mapping are likely to ensure that any attempt to load drivers from a container will fail, even with "root privileges" within the container.
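
For reference, the host-managed approach from articles like the one linked above usually boils down to a few lines in the container config. This is only a rough sketch, not something tested here: the container ID, the exact device list and the major number of /dev/nvidia-uvm all have to be checked on your own host.

Code:
# /etc/pve/lxc/<CTID>.conf  (sketch -- adjust CTID, devices and major numbers to your host)
# 195 is the usual major number of /dev/nvidia0 and /dev/nvidiactl;
# /dev/nvidia-uvm gets a dynamically assigned major, check both with: ls -l /dev/nvidia*
lxc.cgroup2.devices.allow: c 195:* rwm
# example placeholder for the /dev/nvidia-uvm major number:
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file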

However, loading the driver on the host might not make it available and controllable from the LXC container, because of the way CUDA and LXC are written. In my OpenVZ case from a couple of years ago, OpenVZ would just hide some device files CUDA was desperate to check, if only to see whether CUDA was actually installed: it failed for no good reason, and without access to the CUDA runtime there was no way to solve it.

So you need to ensure CUDA is back on the host first and then hope it now works with LXC containers.

Please post your results, because it's something I've been wanting to do for ages on OpenVZ.

I only managed to get it done with Docker, and I believe in running PaaS Docker containers inside IaaS abstraction containers such as OpenVZ and LXC, including with CUDA.
 
To revert, do the following to start with (a rough command sketch follows the list):
  1. Remove/comment blacklist nvidia from /etc/modprobe.d/blacklist.conf
  2. Remove/comment options vfio-pci ids=10de:1cb3,10de:0fb9 disable_vga=1 from /etc/modprobe.d/vfio.conf
  3. rerun the initramfs update.
  4. reboot
  5. Install the NVIDIA tools on the host, create the LXC container, set up the drivers in the container, etc. - follow the instructions in the link you provided
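
A rough command sketch of steps 1-4 (double-check the file contents before editing; the lines to comment out are the ones added earlier for the VM passthrough):

Code:
nano /etc/modprobe.d/blacklist.conf   # comment out: blacklist nvidia
nano /etc/modprobe.d/vfio.conf        # comment out: options vfio-pci ids=10de:1cb3,10de:0fb9 disable_vga=1

update-initramfs -u
reboot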

Docker-specific, but it may also be a useful reference: https://jellyfin.org/docs/general/a...n/nvidia/#configure-with-linux-virtualization
 
Hi
I already removed/commented out the things you suggested.
Finally, I re-ran the initramfs update and rebooted Proxmox.

When I tried to install the NVIDIA drivers, I got an error saying that the vfio drivers were in use.
So I also commented out the vfio entries in /etc/modules, re-ran the initramfs update, and rebooted Proxmox again.

But now, when I try to install the NVIDIA drivers again, I get the following errors:

Code:
ERROR: Unable to load the kernel module 'nvidia-drm.ko'.  This happens most
         frequently when this kernel module was built against the wrong or
         improperly configured kernel sources, with a version of gcc that
         differs from the one used to build the target kernel, or if another
         driver, such as nouveau, is present and prevents the NVIDIA kernel
         module from obtaining ownership of the NVIDIA device(s), or no NVIDIA
         device installed in this system is supported by this NVIDIA Linux
         graphics driver release.

         Please see the log entries 'Kernel module load error' and 'Kernel
         messages' at the end of the file '/var/log/nvidia-installer.log' for
         more information.


ERROR: The nvidia-drm kernel module failed to load. This kernel module is
         required for the proper operation of DRM-KMS. If you do not need to
         use DRM-KMS, you can try to install this driver package again with
         the '--no-drm' option.

ERROR: Installation has failed.  Please see the file
         '/var/log/nvidia-installer.log' for details.  You may find
         suggestions on fixing installation problems in the README available
         on the Linux driver download page at www.nvidia.com.

Should I really try to install the drivers using --no-drm?
Is this safe, or is there another preferable way of doing the installation?
Thanks

PS - By the way, I see a "blacklist nvidiafb" line in the /etc/modprobe.d/pve-blacklist.conf file.
Should I also comment out this line?
 
My guess would be that the driver is not compatible with the kernel version. I would even go as far as to say that NVidia probably did not (yet) release a driver that works with Linux kernel version 6.2. If this is the case, then either use an older Proxmox kernel (match its version with the driver) or contact NVidia support. Other people on this forum ran into this issue.
If it's not because of the kernel version, then make sure you have installed the Proxmox kernel headers. Other people on this forum forgot to do this.
Some of the error messages mention /var/log/nvidia-installer.log; maybe you can find a clue about the problem in there. Otherwise we're just guessing.
 
Hi
My guess would be that the driver is not compatible with the kernel version. I would even go as far as to say that NVidia probably did not (yet) release a driver that works with Linux kernel version 6.2. If this is the case, then either use an older Proxmox kernel (match its version with the driver) or contact NVidia support. Other people on this forum ran into this issue.
If it's not because of the kernel version, then make sure you have installed the Proxmox kernel headers. Other people on this forum forgot to do this.
Some of the error messages mention /var/log/nvidia-installer.log; maybe you can find a clue about the problem in there. Otherwise we're just guessing.

I believe I am still using kernel version 5.13:
Code:
root@pve:~# uname -r
5.13.19-4-pve

I also think the Proxmox kernel headers are up to date:
Code:
root@pve:~# apt install pve-headers-$(uname -r)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
pve-headers-5.13.19-4-pve is already the newest version (5.13.19-9).
0 upgraded, 0 newly installed, 0 to remove and 194 not upgraded.

The /var/log/nvidia-installer.log can be seen here: https://pastebin.com/rAkqwUUS
But I do not see much more than the error messages I already showed earlier.

Please note the two points I already mentioned:
  • the installation program's suggestion to run it with --no-drm.
    Is this safe, or is there another preferable way of avoiding this error?
  • the "blacklist nvidiafb" line in the /etc/modprobe.d/pve-blacklist.conf file.
    Should I also comment out this line?

Any suggestions?
Thanks
 
The /var/log/nvidia-installer.log can be seen here: https://pastebin.com/rAkqwUUS
But I do not see much more than the error messages I already showed earlier.
Can you check whether DRM_KMS_HELPER is enabled in the Proxmox kernel configuration or not?
Please note the two points I already mentioned:
  • the installation program's suggestion to run it with --no-drm.
    Is this safe, or is there another preferable way of avoiding this error?
I don't see why this would not be safe. It looks like enabling DRM_KMS_HELPER in the kernel configuration might fix it. Or maybe try modprobe drm (and any other modules that the NVidia proprietary driver needs) before installing it?
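
Something along these lines might help as a quick check before re-running the installer (just a diagnostic sketch, nothing NVidia-specific):

Code:
# try to load the generic DRM modules the NVIDIA module links against
modprobe drm
modprobe drm_kms_helper

# see whether they actually loaded, and check the kernel log if not
lsmod | grep drm
dmesg | tail
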
  • the "blacklist nvidiafb" line in the "/etc/modprobe.d/pve-blacklist.conf" file.
    Should I also comment this line?
This is a completely unrelated driver and is unlikely to change anything for the compilation process of the NVidia proprietary driver. I suggest you keep it.
Any suggestions?
Maybe build your own variant of the Proxmox kernel with all the settings and modules built-in that are required for the NVidia proprietary driver?
 
Meanwhile, I tried this:
Code:
root@pve:~# dmesg | grep drm_kms_helper
[  259.532326] drm_kms_helper: Unknown symbol cec_delete_adapter (err -2)
[  259.532553] drm_kms_helper: Unknown symbol cec_transmit_attempt_done_ts (err -2)
[  259.532987] drm_kms_helper: Unknown symbol cec_s_phys_addr (err -2)
[  259.533029] drm_kms_helper: Unknown symbol cec_s_conn_info (err -2)
[  259.533100] drm_kms_helper: Unknown symbol cec_s_phys_addr_from_edid (err -2)
[  259.534142] drm_kms_helper: Unknown symbol cec_unregister_adapter (err -2)
[  259.534480] drm_kms_helper: Unknown symbol cec_allocate_adapter (err -2)
[  259.534892] drm_kms_helper: Unknown symbol cec_fill_conn_info_from_drm (err -2)
[  259.534976] drm_kms_helper: Unknown symbol cec_received_msg_ts (err -2)
[  259.535187] drm_kms_helper: Unknown symbol cec_register_adapter (err -2)
[  259.828450] nvidia_drm: Unknown symbol drm_kms_helper_poll_fini (err -2)
[  259.828521] nvidia_drm: Unknown symbol drm_kms_helper_poll_disable (err -2)
[  259.828594] nvidia_drm: Unknown symbol drm_kms_helper_poll_init (err -2)
[  259.829660] nvidia_drm: Unknown symbol drm_kms_helper_hotplug_event (err -2)
[  658.277686] drm_kms_helper: Unknown symbol cec_delete_adapter (err -2)
[  658.277902] drm_kms_helper: Unknown symbol cec_transmit_attempt_done_ts (err -2)
[  658.278242] drm_kms_helper: Unknown symbol cec_s_phys_addr (err -2)
[  658.278273] drm_kms_helper: Unknown symbol cec_s_conn_info (err -2)
[  658.278327] drm_kms_helper: Unknown symbol cec_s_phys_addr_from_edid (err -2)
[  658.279211] drm_kms_helper: Unknown symbol cec_unregister_adapter (err -2)
[  658.279473] drm_kms_helper: Unknown symbol cec_allocate_adapter (err -2)
[  658.279773] drm_kms_helper: Unknown symbol cec_fill_conn_info_from_drm (err -2)
[  658.279838] drm_kms_helper: Unknown symbol cec_received_msg_ts (err -2)
[  658.280021] drm_kms_helper: Unknown symbol cec_register_adapter (err -2)
[  658.574120] nvidia_drm: Unknown symbol drm_kms_helper_poll_fini (err -2)
[  658.574191] nvidia_drm: Unknown symbol drm_kms_helper_poll_disable (err -2)
[  658.574259] nvidia_drm: Unknown symbol drm_kms_helper_poll_init (err -2)
[  658.575468] nvidia_drm: Unknown symbol drm_kms_helper_hotplug_event (err -2)
[20898.463907] drm_kms_helper: Unknown symbol cec_delete_adapter (err -2)
[20898.464082] drm_kms_helper: Unknown symbol cec_transmit_attempt_done_ts (err -2)
[20898.464432] drm_kms_helper: Unknown symbol cec_s_phys_addr (err -2)
[20898.464463] drm_kms_helper: Unknown symbol cec_s_conn_info (err -2)
[20898.464518] drm_kms_helper: Unknown symbol cec_s_phys_addr_from_edid (err -2)
[20898.465323] drm_kms_helper: Unknown symbol cec_unregister_adapter (err -2)
[20898.465565] drm_kms_helper: Unknown symbol cec_allocate_adapter (err -2)
[20898.465869] drm_kms_helper: Unknown symbol cec_fill_conn_info_from_drm (err -2)
[20898.465935] drm_kms_helper: Unknown symbol cec_received_msg_ts (err -2)
[20898.466096] drm_kms_helper: Unknown symbol cec_register_adapter (err -2)
[20898.748784] nvidia_drm: Unknown symbol drm_kms_helper_poll_fini (err -2)
[20898.748855] nvidia_drm: Unknown symbol drm_kms_helper_poll_disable (err -2)
[20898.748921] nvidia_drm: Unknown symbol drm_kms_helper_poll_init (err -2)
[20898.750011] nvidia_drm: Unknown symbol drm_kms_helper_hotplug_event (err -2)
 
Just to add that I tried an older driver version (495.44) and the error is the same:
"Unable to load the kernel module 'nvidia-drm.ko'"

I found several topics in this forum (for instance here) about installing the Nvidia drivers on the Proxmox host, but none of them mentions this problem.
 
Since I did not receive any more tips, I decided to try installing the Nvidia drivers with the --no-drm option:
Code:
./NVIDIA-Linux-x86_64-535.104.05.run --no-drm

With that, I got the following warning:
Code:
WARNING: The nvidia-drm module will not be installed. As a result, DRM-KMS
           will not function with this installation of the NVIDIA driver.

And with that, on the Proxmox host I was able to get this:
Code:
root@pve:~# nvidia-smi
Mon Aug 28 13:28:23 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro P400                    Off | 00000000:0C:00.0 Off |                  N/A |
| 28%   42C    P0              N/A /  N/A |      0MiB /  2048MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

The problem now is that I do not see the file /dev/nvidia-modeset:
Code:
root@pve:~# ls /dev/nvidia*
/dev/nvidia0  /dev/nvidiactl  /dev/nvidia-uvm  /dev/nvidia-uvm-tools

/dev/nvidia-caps:
nvidia-cap1  nvidia-cap2

That file seems to be required when I try to use the drivers inside a Docker container running in the LXC container:
Code:
failed to deploy a stack: Network jellyfin_default
Creating Network jellyfin_default
Created Container jellyfin
Creating Container jellyfin
Created Container jellyfin
Starting Error response from daemon:
failed to create task for container:
failed to create shim task:
OCI runtime create failed:
runc create failed:
unable to start container process:
error during container init:
error running hook #0:
error running hook:
exit status 1, stdout:
, stderr:
Auto-detected mode as 'legacy' nvidia-container-cli:
mount error:
stat failed:
/dev/nvidia-modeset: no such file or directory: unknown

I think I will have to go back to running Jellyfin in a VM instead of an LXC container!!! :(
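
(One check that might still be worth a try, assuming the nvidia-modeset kernel module was actually built and installed despite --no-drm, which is not guaranteed:)

Code:
# does the modeset module exist for this kernel at all?
modinfo nvidia-modeset

# if it does, this should load it and create /dev/nvidia-modeset
nvidia-modprobe --modeset
ls -l /dev/nvidia-modeset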
 
I tried to find out how to check this (and, if it is not enabled, how to enable it) but did not find out how to do either.
Can you please explain how to do it?
I also didn't know and had to google for it: zgrep DRM_KMS_HELPER /boot/config- (you have to append your kernel version). It's probably a module (=m). You can only change it by building the Proxmox kernel yourself, but I don't think that is necessary.
Did you try running modprobe drm_kms_helper before installing the proprietary NVidia driver?

I think I will have to go back to running Jellyfin in a VM instead of an LXC container!!! :(
Did you ask NVidia support or other forums more knowledgeable about NVidia? Installing their closed-source driver is not specific to Proxmox but can be difficult on lots of other GNU/Linux distributions. I don't have NVidia devices (for years, for that reason) and can only guess at what their requirements are...
 
I also didn't know and had to google for it: zgrep DRM_KMS_HELPER /boot/config- (you have to append your kernel version). It's probably a module (=m). You can only change it by building the Proxmox kernel yourself, but I don't think that is necessary.
Did you try running modprobe drm_kms_helper before installing the proprietary NVidia driver?


Did you ask NVidia support or other forums more knowledgeable about NVidia? Installing their closed-source driver is not specific to Proxmox but can be difficult on lots of other GNU/Linux distributions. I don't have NVidia devices (for years, for that reason) and can only guess at what their requirements are...

Yes, it says:
Code:
root@pve:~# zgrep DRM_KMS_HELPER /boot/config-5.13.19-4-pve
CONFIG_DRM_KMS_HELPER=m

I did not try running your command before the installation, but even now it fails with an unknown symbol error:
Code:
root@pve:~# modprobe drm_kms_helper
modprobe: ERROR: could not insert 'drm_kms_helper': Unknown symbol in module, or unknown parameter (see dmesg)

I tried to find something about this here, but without much success.

Thanks anyway...
 
Sorry I didn't follow up on this...

And I didn't see you mention Docker before...

So far I've restricted myself to running GPUs inside VMs and using Docker inside those VMs to run CUDA workloads: that process is a bit more involved than running CUDA and Docker on a bare metal host, because you first have to get GPU pass-through working, but it's pretty easy after that, because it's so much the same as bare metal CUDA + Docker.

I had tons of trouble when I was trying to run Docker + CUDA with oVirt/RHV on the same host, so I guess Proxmox may not be more forgiving.

I haven't looked into LXC + Docker, but as I said before, OpenVZ + Docker is a much happier marriage when you don't try to add CUDA.

I'd still suggest splitting the problem before you go crazy.

Docker does all sorts of crazy things with the network and the firewall to do its overlay network magic. LXC, KVM and Proxmox for that matter may do similar things and they may not be aware of each other. It took me a while to realize that installing oVirt (which is also KVM plus orchestration) destroyed my Docker networking in very awkward ways.

Then there is the potential issue of control groups, namespaces and capabilities interfering with the CUDA runtime: Nvidia has done Docker integration work, but no LXC integration. If you have lots of spare time on your hands I'd like to see your results, but in practical terms you might just be better off passing the GPU through to a VM and running Docker and CUDA inside that.

That's how I operate currently on both Proxmox and oVirt/RHV, because the GPU will be the main resource for the CUDA workloads; they don't need lots of CPU and RAM to do their thing, which is why other non-CUDA VMs (or containers) can still use the machine for other services.

Inside the VM, because the GPU is passed through, there really isn't any significant virtualization overhead, nothing to slow down its CUDA crunching. And there you don't have to worry about LXC, KVM and Docker fighting over the network or control groups.

So you can try to have the host regain control over the GPU by deleting all the VM pass-through stuff and first run CUDA "bare metal", without any LXC or Docker, beside the hypervisor and outside a VM.

And then you can try adding the layers one-by-one, first LXC, then Docker inside LXC (still not sure if you actually want to use Docker).
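
Roughly, as an order of checks (a sketch only; the container ID 101 and the CUDA image tag are placeholders, and the Docker step assumes the NVIDIA container toolkit is installed wherever Docker runs):

Code:
# 1. bare metal: the driver works on the Proxmox host itself
nvidia-smi

# 2. inside the LXC container (101 is a placeholder container ID)
pct exec 101 -- nvidia-smi

# 3. Docker on top, via the NVIDIA container toolkit
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi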

When it comes to running several CUDA workloads side by side, the CUDA-augmented Docker will most likely work better (even inside a VM), because it's designed for that and offers some extra controls for multi-GPU systems.

LXC and OpenVZ aim for IaaS abstractions and would therefore tend to isolate containers to the degree that their fight over GPU resources might not end well.

I haven't really looked at LXC since 2008, when I went with OpenVZ instead (while Proxmox went the other way ;-), but my impression is rather firm that I'd rather keep CUDA workloads stable in a VM with Docker inside, especially since there is a lot of CI/CD tooling hooked into Docker, than experiment with LXC.

Hope this helps!
 
I actually haven't gotten to CUDA on Docker with Proxmox yet, because I am testing this in the home-lab.

And there I was tempted to try it first with Windows, version 11 as it happens. And that went swimmingly, once I was able to make the IOMMU kernel command line parameters stick, which were not taken from /etc/default/grub as documented... but had to be edited into /etc/kernel/cmdline.
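
For anyone running into the same thing: on hosts that boot via systemd-boot (typically UEFI with a ZFS root), Proxmox takes the kernel command line from /etc/kernel/cmdline instead of /etc/default/grub, and the change is applied with proxmox-boot-tool. A rough sketch (the root= part is whatever your file already contains; the ZFS path below is just the common default, and use intel_iommu=on on Intel hosts):

Code:
nano /etc/kernel/cmdline

# single line; keep the existing root= entry and append the IOMMU options, e.g.:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet amd_iommu=on iommu=pt

proxmox-boot-tool refresh
reboot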

I ran it with an RTX 2080ti on a Haswell Xeon E5-2696v3, which is officially shunned by Windows 11 (but works just fine in a VM) and then with an RTX 4090 on a Ryzen 9 5950X next, which enables IOMMU all on its own.

(Too bad libvirt clashes with Proxmox because I've come to rely on virt-host-validate to help with IOMMU diagnostics...)

Passed through the GPU and all the USB controllers, so that it felt pretty much like running the Windows PC natively, while in fact its disk came from a three-node HCI Ceph cluster on different nodes.

I even passed through a nice old dual FusionIO 2.4TB IOdrive on the Haswell Xeon to the VM, which worked just as well and held my Steam game cache...

No visible loss of performance for games; G-Sync, 144Hz refresh, HDR and dual 4K screens worked just fine, the GPUs (and the FusionIO) really feel native on Windows, and all the USB peripherals just as well.

That quite impressed me because I remember how much more difficult this used to be with KVM-only or even with oVirt, initially.

After that I'm rather convinced that Linux VMs with CUDA, with Docker or without, will work perfectly, too.
 
Note that my goal is to use my Quadro P400 in an LXC container, not in a VM (which I had already accomplished before).
My current problem is properly installing the NVIDIA drivers on my Proxmox v7.1 host.
But I intend to update it to version 8, and maybe then I can do it without the current issues.
 
Note that my goal is to use my Quadro P400 in an LXC container, not in a VM (which I had already accomplished before).
My current problem is properly installing the NVIDIA drivers on my Proxmox v7.1 host.
But I intend to update it to version 8, and maybe then I can do it without the current issues.
I understood that, but I wanted to give you a little hint that that road may be rarely trodden and so full of goblins that indeed the "less efficient" way (in terms of computing resources) of using a VM may be more efficient with regard to your time.

But I'm happy to hear your reports with LXC.
 
Note that my goal is to use my Quadro P400 in an LXC container, not in a VM (which I had already accomplished before).
My current problem is properly installing the NVIDIA drivers on my Proxmox v7.1 host.
But I intend to update it to version 8, and maybe then I can do it without the current issues.
Hi, did you manage to revert and pass the GPU through to an LXC container? Thanks
 
