After a fresh reboot I can see and use my NVIDIA GeForce RTX 3060 GPU on the host and in Docker containers:
Code:
root@pve:~# nvidia-smi
Mon Nov 27 18:47:18 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060        Off | 00000000:01:00.0 Off |                  N/A |
|  0%   60C    P0              43W / 170W |      1MiB / 12288MiB |      6%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
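For reference, this is how I verify the GPU from inside a container at that point (a sketch; the exact CUDA image tag is an assumption, and it requires the NVIDIA container toolkit to be installed):
Code:
root@pve:~# docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi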
As soon as I use this GPU in my Win10 VM, I cannot use it on the host machine after shutting down Windows (either via the Windows start menu shutdown or qm stop 100).
This is what my VM config looks like:
Code:
agent: 1
bios: ovmf
boot: order=ide0;ide2;net0;sata0
cores: 5
cpu: host
efidisk0: fastn:101/vm-101-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
hostpci0: 0000:01:00,pcie=1,x-vga=1
ide0: fastn:101/vm-101-disk-1.qcow2,cache=writethrough,size=350G,ssd=1
machine: pc-q35-8.0
memory: 24576
meta: creation-qemu=8.0.2,ctime=1691479506
name: win10ssd
net0: e1000=06:4F:B9:A3:7B:90,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsihw: virtio-scsi-single
smbios1: uuid=53296502-4346-43a7-aed6-84333ee24a4f
sockets: 2
unused0: fastn:101/vm-101-disk-2.raw
usb0: host=04e8:3301,usb3=1
usb1: host=3302:29c7
usb2: host=093a:2510
usb3: host=1c4f:0015
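For anyone wanting to check the passthrough state: the hostpci0 line hands the whole device at 0000:01:00 to the VM, and the owning kernel driver can be watched like this (a sketch, using the address from the config above):
Code:
root@pve:~# lspci -nnk -s 01:00.0
# "Kernel driver in use:" should read nvidia on the host,
# and vfio-pci while the VM holds the card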
After that VM shutdown, Docker containers can't see and use the GPU anymore, and neither can host processes like nvidia-smi:
Code:
root@pve:~# nvidia-smi
Failed to initialize NVML: Unknown Error
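In case it is relevant for suggestions: my understanding is that rebinding the card by hand, instead of rebooting, would look roughly like this (a sketch only; the device addresses are taken from my config above, the module names are the stock NVIDIA ones, and this sequence is untested here):
Code:
# unbind both GPU functions from vfio-pci (addresses assumed from hostpci0 above)
echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
echo 0000:01:00.1 > /sys/bus/pci/drivers/vfio-pci/unbind
# reload the NVIDIA kernel modules so the host driver can claim the card again
modprobe -r nvidia_uvm nvidia_drm nvidia_modeset nvidia
modprobe nvidia
# hand the GPU back to the nvidia driver
echo 0000:01:00.0 > /sys/bus/pci/drivers/nvidia/bind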
Anyone got an idea what could be causing this behaviour?
Contents of my /etc/default/grub:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=vesafb:off video=efifb:off initcall_blacklist=sysfb_init"
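For completeness: changes to this file only take effect after regenerating the GRUB config and rebooting, which I do with:
Code:
root@pve:~# update-grub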
Only a complete reboot of Proxmox helps.