I'm using Proxmox 4.3-71 on a Dell R730. Besides a lot of CPU cores, the system also has 1x Asus 970 Strix GPU, and we are planning to upgrade to 2x bigger GPUs for more compute power. The GPU is passed through to a VM using PCI passthrough.
The system was set up one year ago with GPU passthrough working nicely (both Windows and Linux guests). I came back a couple of days ago and found that the GPU was no longer working properly inside the VM after updating Proxmox to the newest version (I don't remember the old version). The driver would randomly crash, although 'nvidia-smi' still worked. Most CUDA samples would either run through a single time or block indefinitely until the VM was restarted. Driver version 370.28 with CUDA 8.
So first off, I managed to fix the problem. The error was in the CPU options:
Generated by Proxmox: -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,kvm=off (not working)
Working: -cpu host,kvm=off
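For reference, the full KVM command line that Proxmox generates for a VM can be dumped with 'qm showcmd', which makes it easy to compare the two variants above (VMID 100 is only an example here):

# print the kvm command Proxmox would use to start VM 100
qm showcmd 100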
Is there a way to disable the +kvm_pv_unhalt and +kvm_pv_eoi options in Proxmox for this single VM, either via the conf file or via an interface provided by Proxmox?
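What I would ideally end up with is something along these lines in the VM's config file (just a sketch, assuming VMID 100 and assuming the 'args' option in /etc/pve/qemu-server/100.conf gets appended after the generated CPU options, so that the last -cpu on the command line takes effect):

# /etc/pve/qemu-server/100.conf (VMID 100 is only an example)
# extra arguments appended to the generated kvm command
args: -cpu host,kvm=off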