Search results for query: nvidia-smi

  1. Successful Dual NVIDIA H200 NVL Passthrough + Full NVLink (NV18) on Proxmox VE 8.4 (HPE DL385 Gen11)

    ...+ OVMF) Guest OS: • Ubuntu 22.04 • Ubuntu 24.04 • NVIDIA driver 580.95.05 • CUDA 13.0 Result: Both GPUs passed through successfully. nvidia-smi nvlink --status shows all 18× NVLink lanes active per GPU (26.562 GB/s each), meaning full NVLink (NV18) is functional inside a VM. Measured...
  2. Error with Docker container after Linux update

    ...APUs from AMD, I actually find it quite elegant that Docker in the LXC lets you see transparently, all the way up to the host, what is happening with Ollama/LLMs/CUDA/nvidia-smi. With this solution, which is a valid alternative, there is no pointless binding of GPUs to audio devices that breaks everything when you...
  3. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam)

    ...550.163.02.patch. Installed vgpu-unlock-rs from https://github.com/mbilker/vgpu_unlock-rs. Rebooted, and voilà, have P40 profiles :) nvidia-smi Wed Nov 12 08:12:24 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.163.02...
  4. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam)

    You want to change the lines in vgpu_unlock-rs/src/lib.rs at 370-377, where it says Tesla T4 and gives a hex code; change the 1EB8 to 1EB9 for the 32 GB edition of the T4 (can't find anything about the card itself as a 32 GB edition, but a few lists give that option or list it as a Quadro 6000...
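A hedged sketch of that edit, assuming the T4's device ID appears in lib.rs as a hex literal like 0x1EB8 (line numbers and exact formatting differ between vgpu_unlock-rs revisions, so inspect the file before running):

```shell
# Hypothetical one-liner for the ID swap described above; keeps a .bak copy.
# LIB is an assumed path to a vgpu_unlock-rs checkout - adjust to your layout.
LIB=${LIB:-vgpu_unlock-rs/src/lib.rs}
if [ -f "$LIB" ]; then
    sed -i.bak 's/0x1EB8/0x1EB9/g' "$LIB"   # spoof the 32 GB T4 variant
else
    echo "no lib.rs at $LIB - adjust LIB first"
fi
```

After the edit, rebuild vgpu_unlock-rs and reboot (as the post above did) so the new ID is picked up.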
  5. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam)

    ...16GB. When I go to create a 3rd VM, it says I have zero devices available for 8GB (two VM's are currently running with nvidia-233): nvidia-smi vgpu Tue Nov 11 17:09:58 2025 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 550.144.02...
  6. [NVidia] How to use one GPU as PCI VM passthrough and the other as shared compute?

    ...> /sys/bus/pci/devices/$DEV/driver_override done fi modprobe -i vfio-pci And this works! ...for about 5 minutes. At first, nvidia-smi returns real values. After that, I start getting: root@pve:~# nvidia-smi Tue Nov 11 15:41:31 2025...
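The driver_override trick in that snippet can be reduced to a small helper (a sketch, not the poster's exact script; the PCI addresses in the comment are examples, and the sysfs root is parameterized only so the logic can be exercised off-host):

```shell
# Pin the given PCI functions to vfio-pci via driver_override, then load vfio-pci.
# With driver_override set, no other driver (e.g. nvidia) may claim the device.
bind_to_vfio() {
    sys=${1:-/sys}                 # sysfs root (parameterized for testing)
    shift
    for dev in "$@"; do            # e.g. 0000:01:00.0 0000:01:00.1
        echo vfio-pci > "$sys/bus/pci/devices/$dev/driver_override"
    done
    modprobe -i vfio-pci 2>/dev/null || true   # ignore failure off-host
}
```

Usage might look like `bind_to_vfio /sys 0000:01:00.0 0000:01:00.1`, run before the nvidia driver binds the card — the snippet above shows the result not persisting, so ordering against the host driver still matters.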
  7. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    ...per @zenowl77 with a Proxmox 8.4 VM, but getting the same results: patched and got 'NVIDIA-Linux-x86_64-570.172.07-vgpu-kvm-custom.run'. Both nvidia-smi calls return showing GPUs, but mdevctl types is blank. Here's what dmesg is showing: dmesg -T | grep vgpu [Sat Nov 8 19:38:15 2025]...
  8. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    ...P6000 and a Titan Xp. I've patched 16.9 NVIDIA-Linux-x86_64-535.230.02-vgpu-kvm-custom.run with 535.230.02.patch; it installs fine, nvidia-smi returns both cards, as does nvidia-smi vgpu, but nothing in mdevctl. nvidia-smi Sat Nov 8 18:36:23 2025...
  9. vGPU doesn't work with pytorch/nccl/vllm

    sure! $ nvidia-smi Fri Nov 7 14:13:38 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.95.05 Driver Version: 580.95.05 CUDA Version: 13.0 |...
  10. vGPU doesn't work with pytorch/nccl/vllm

    Did you choose the correct vGPU profile and set the NVIDIA license token in the VM? Would you mind showing nvidia-smi and nvidia-smi -q from the VM?
  11. Passthrough 4090 Lockup Linux Only

    ...One thing I did notice is that, after installing the nvidia driver on the host machine, if I tried to unbind from vfio and bind to nvidia, nvidia-smi wouldn't show the 4090, so I did lspci -nnk -d 10de: and it showed the active driver being nvidia for both the 4090 and the 970. But nvidia-smi...
  12. vGPU doesn't work with pytorch/nccl/vllm

    ...for VDIs. For LLM inference we have mapped 4 vGPUs into a virtual machine. All 4 vGPUs show up correctly in the guest system (using nvidia-smi). We set up vLLM (0.11.0) and torch 2.8.0+cu128 in a Python virtual environment. To our understanding, torch comes with a pre-compiled CUDA + NCCL...
  13. Power Consumption when GPU idle with Passthrough

    ...would stay in Power State P0 - turns out the default persistence mode for the daemon on my install was "off" (you can verify by running "nvidia-smi -pm 1" to turn it on and see the difference). I modified /etc/systemd/system/nvidia-persistenced.service to set the default as...
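The modification mentioned above usually amounts to making nvidia-persistenced start with persistence mode on. A sketch as a systemd drop-in rather than editing the unit in place (path, user, and flags are assumptions — distro packaging varies, and some ExecStart lines pass --no-persistence-mode, which is what you would remove):

```ini
# /etc/systemd/system/nvidia-persistenced.service.d/override.conf (hypothetical)
[Service]
ExecStart=
ExecStart=/usr/bin/nvidia-persistenced --user nvidia-persistenced --persistence-mode --verbose
```

After editing, run `systemctl daemon-reload` and restart the service; `nvidia-smi -q` should then report Persistence Mode as Enabled.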
  14. [SOLVED] NVIDIA vGPU - No devices were found

    ...detected. ./NVIDIA-Linux-x86_64-580.95.02-vgpu-kvm.run --accept-license --no-questions --ui=none --kernel-module-type=proprietary --dkms nvidia-smi Mon Nov 3 09:48:41 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.95.02...
  15. [SOLVED] NVIDIA vGPU - No devices were found

    ...with NVIDIA GRID. On a SuperMicro GPU host, after updating the GRID host drivers from 550 to 570, our RTX A5000 is no longer detected via nvidia-smi. After downgrading to 550, everything works flawlessly again. Proxmox 8.4 (latest 8.x release) Kernel: 6.8.12-15-pve There are several...
  16. 3 Minute Delay Starting VM with GPU Passthrough (vfio-pci reset issue)

    ...|| true sleep 1 # short pause # initialize the GPU on the host (exactly as in your manual script) nvidia-smi || true log "GPU successfully returned to host." ;; esac exit 0 There is no vendor-reset for NVIDIA like there is for AMD GPUs, is there?
  17. 3 Minute Delay Starting VM with GPU Passthrough (vfio-pci reset issue)

    ...sleep 2 echo "$GPU" > /sys/bus/pci/drivers/nvidia/bind echo "$AUDIO" > /sys/bus/pci/drivers/snd_hda_intel/bind nvidia-smi >/dev/null 2>&1 log "GPU successfully returned to host." ;; esac exit 0 When I shut down the VM, the post-stop script fails with "Device or resource...
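That post-stop hook can be condensed into a function (a reconstruction under assumptions: PCI addresses and driver names are examples from the snippet, and whether a settle delay or a different unbind order avoids the "Device or resource busy" error is install-specific):

```shell
# Return a passed-through GPU (and its HDA audio function) to the host after
# VM shutdown: unbind from vfio-pci, rebind to the host drivers, poke nvidia-smi.
# Addresses/driver names are examples; the sysfs root is a parameter for testing.
return_gpu_to_host() {
    sys=${1:-/sys}
    gpu=${2:-0000:01:00.0}
    audio=${3:-0000:01:00.1}

    echo "$gpu"   > "$sys/bus/pci/drivers/vfio-pci/unbind" 2>/dev/null || true
    echo "$audio" > "$sys/bus/pci/drivers/vfio-pci/unbind" 2>/dev/null || true
    sleep 1                                   # let the device settle before rebinding
    echo "$gpu"   > "$sys/bus/pci/drivers/nvidia/bind" 2>/dev/null || true
    echo "$audio" > "$sys/bus/pci/drivers/snd_hda_intel/bind" 2>/dev/null || true
    nvidia-smi > /dev/null 2>&1 || true       # wake the driver; ignore failure
    echo "GPU returned to host"
}
```

Called from a hookscript's post-stop case as `return_gpu_to_host /sys 0000:01:00.0 0000:01:00.1`; the `|| true` guards keep the hook from aborting the VM lifecycle when a bind step fails.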
  18. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    If you intend to use the gpu on the host or in LXC containers, as opposed to passing through to a VM, don’t blacklist nvidia*
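For the opposite case — passing the GPU through to a VM — the blacklist that post warns about typically looks like a modprobe.d fragment (filename and module list are illustrative; skip it entirely if the host or LXC containers need the GPU, as the post advises):

```
# /etc/modprobe.d/blacklist-nvidia.conf (hypothetical; VM-passthrough hosts only)
blacklist nouveau
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_uvm
```

On Debian-based hosts like Proxmox, follow it with `update-initramfs -u` and a reboot so the blacklist takes effect before the drivers can load.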
  19. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    ...Adding boot menu entry for UEFI Firmware Settings ... done Setting up proxmox-kernel-6.17 (6.17.1-1) ... root@pve-bdr:~# nvidia-smi Wed Oct 15 07:12:54 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 580.95.05...
  20. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    ...updated to kernel 6.17, updated NVIDIA drivers from the 570.x series to the latest 580.x, and I don't have any hardware transcoding, even though nvidia-smi shows GPU information on both the host and in the Plex container...