Search results for query: nvidia-smi

  1. D

    [SOLVED] PVE 9.1.4 / NVIDIA Tesla T4 / vGPU 19.3 Installation

...tried the L4 today on Proxmox 9.1.4 with the new driver, and everything worked exactly as documented in the Proxmox wiki. nvidia-smi produces the correct output I was used to from ESXi, if anything even better :D: I could imagine it isn't working for you because the...
  2. R

    [SOLVED] PVE 9.1.4 / NVIDIA Tesla T4 / vGPU 19.3 Installation

    ...ab Ampere und neuer nötig sein: systemctl enable --now pve-nvidia-sriov@ALL.service Aber nach einem Reboot sieht das wieder so aus: # nvidia-smi NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running...
  3. Impact

    LXC config problem after core reboot

    May I introduce you to the NVIDIA container toolkit way? https://gist.github.com/Impact123/3dbd7e0ddaf47c5539708a9cbcaab9e3#nvidia-specific Does calling nvidia-smi before starting the CT change anything?
  4. B

    LXC config problem after core reboot

    ...none bind,optional,create=file lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file lxc.mount.entry: /usr/bin/nvidia-smi usr/bin/nvidia-smi none bind,optional,create=file lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file lxc.mount.entry...
  5. 0

    RTX 5060 Ti VFIO passthrough on AMD X570 consumer motherboard fails – Firmware 1:1 IOMMU issue

What motherboard do you have? I can get mine to work and nvidia-smi sees it. According to ChatGPT I have to enable the following: Advanced → AMD CBS → SVM Mode: Enabled Advanced → AMD CBS → SVM Lock: Disabled Advanced → IOMMU: Enabled Advanced → PCIe ACS Control: Enabled Advanced →...
  6. C

    GPU Passthrough not respecting secondary GPU

    ...1080 GPU to a VM (after adding the PCI-E device via webui), however the second gpu the 1060 is not available for the host: root@tower8:~# nvidia-smi NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running...
  7. R

    [SOLVED] PVE 9.1.4 / NVIDIA Tesla T4 / vGPU 19.3 Installation

    ...(rev a1) Subsystem: NVIDIA Corporation Device [10de:12a2] Kernel modules: nvidiafb, nouveau, nvidia_vgpu_vfio, nvidia # nvidia-smi NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running...
  8. J

    [SOLVED] GPU issues, no nvidia in /dev

    Just had to fresh install proxmox 9 after an upgrade attempt caused a kernel panic. I tried restoring my docker-frigate LXC but the restore is skipping a bunch of files so I decided to pull the config files from the backup and start fresh. Got nvidia drivers installed on proxmox and started...
  9. D

    NVIDIA Driver Passthrough Failure for OpenWebUI

Your issue isn't the Nvidia driver; it's a repository conflict. You are trying to install dependencies for Intel GPUs on a system meant for Nvidia. The level-zero-dev error comes from the Intel repositories listed in your apt update log (Hit:4 and Hit:5), and it is breaking your package manager.
  10. A

    NVIDIA DKMS fails on Proxmox VE 9 / kernel 6.17.4 (Quadro P2000)

    ...so I added non-free and tried the Debian NVIDIA packages: apt-get install --no-install-recommends nvidia-kernel-dkms nvidia-driver-bin nvidia-smi nvidia-modprobe DKMS fails when building nvidia-current/550.163.01 for 6.17.4-1-pve. DKMS error snippet It fails in nvidia-drm with DRM API...
  11. S

    [TUTORIAL] Proxmox 9.1; nvidia drivers; desktop GUI

    ...NVIDIA Corporation GP107GL [Quadro P620] [10de:1cb6] (rev a1) Subsystem: Dell Device [1028:1264] Kernel driver in use: nvidia nvidia-smi +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.119.02 Driver Version...
  12. S

    [TUTORIAL] Proxmox 9.1; nvidia drivers; desktop GUI

    ...NVIDIA Corporation GP107GL [Quadro P620] [10de:1cb6] (rev a1) Subsystem: Dell Device [1028:1264] Kernel driver in use: nvidia nvidia-smi +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.119.02 Driver Version...
  13. M

    NVIDIA Driver Passthrough Failure for OpenWebUI

...the Nvidia proprietary setup about a missing X config file and a recommendation to install pkg-config. Regardless, I pushed through and nvidia-smi or nvtop displayed the GPUs and all seemed well, or so I thought. I next passed the driver through into the OpenWebUI LXC created from the Proxmox...
  14. D

    [TUTORIAL] NVIDIA drivers instalation Proxmox and CT

Hello, @jwelvaert: Can you test this method? Proxmox host information: Groups: 195, 234, 237 /etc/pve/lxc/100.conf LXC: (Debian 12 / kernel 6.8.12-17-pve)
  15. O

    Dual GPU Nvidia L40s Support on single VM

Hi, did you turn on SR-IOV in the BIOS? I assigned 2 GPUs to resource mappings in the datacenter, and I added the 2 GPUs to the VM. Finally, I could see both GPUs via nvidia-smi in an Ubuntu 24.04 server VM.
  16. O

    Dual A100 Passthrough to Ubuntu 22.04 VM - Proxmox8

Hi, did you turn on SR-IOV in the BIOS? I assigned 2 GPUs to resource mappings in the datacenter, and I added the 2 GPUs to the VM. Finally, I could see both GPUs via nvidia-smi in an Ubuntu 24.04 server VM.
  17. B

    my experience with proxmox + thunderbolt eGPU

    ...but for Nvidia. Maybe you could try pci=noaer as they outline here: https://github.com/NVIDIA/open-gpu-kernel-modules/pull/981#issuecomment-3621315260. and here https://egpu.io/forums/thunderbolt-linux-setup/rtx-5080-via-thunderbolt-5-egpu-hard-lock-on-cuda-operations-nvidia-smi-works-at-idle/
  18. Z

    Issues moving from NVIDIA RTX3080 GPU passthrough to AMD RX 9070 XT

Yes, this is how it worked with the multiple nvidia GPUs that have been passed through this machine, except that the display would change to the guest OS; with the AMD GPU the display stays on the host dialogue with a frozen cursor. Output is attached below; I did update the script to my device's PCI ID...
  19. U

    NVIDIA L4 GPU (AD104GL)

Just done exactly that for a customer ;) Not tested yet, but nvidia-smi sees the card inside the container, so I think it will work. Additional lxc-config entries: lxc.cgroup2.devices.allow: c 195:* rwm lxc.cgroup2.devices.allow: c 234:* rwm lxc.cgroup2.devices.allow: c 509:* rwm...
  20. B

    vGPU doesn't work with pytorch/nccl/vllm

...DDP to train a model, the execution fails with: CUDA error: operation not supported. The vGPU license was active at the time of testing: nvidia-smi -q | grep "License Status" License Status : Licensed (Expiry: 2025-11-22 6:1:40 GMT) As a matter of fact, we tried...
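
Several of the results above (items 4, 14, and 19) show fragments of the LXC configuration used to expose an NVIDIA GPU to an unprivileged container. A combined sketch follows, assembled only from the entries visible in those posts; the device major numbers (195, 234, 509) vary per host and driver version, so verify yours with `ls -l /dev/nvidia*` before copying. The `nvidia-uvm-tools` line is an assumption (it is commonly bind-mounted alongside `nvidia-uvm`), not something shown in the snippets.

```
# /etc/pve/lxc/<CTID>.conf — hedged sketch, not a complete recipe.
# Allow the container to access the NVIDIA character devices
# (major numbers taken from items 14 and 19; check ls -l /dev/nvidia*):
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 234:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
# Bind-mount the device nodes and the nvidia-smi binary (item 4):
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /usr/bin/nvidia-smi usr/bin/nvidia-smi none bind,optional,create=file
```

Note that bind-mounting nvidia-smi alone is not sufficient: the userspace driver libraries inside the container must match the host kernel module version, which is why item 3 points to the NVIDIA container toolkit as an alternative.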