Hello,
As the holidays begin, I am going to try out Proxmox on my home PC.
My system specs are as follows:
Mainboard: MSI X370 SLI Plus
CPU: Ryzen 7 1700
RAM: 16 GB Vengeance LPX
Storage: a single HDD for now
Graphics: EVGA GTX 1070 SC ACX 3.0
Graphics 2: an old HD 7770 (to be removed)
WiFi: Qualcomm Atheros AR93xx Wireless Network Adapter
VM1: Fedora 27
VM2: Windows 10
I managed to get Proxmox (latest release) up and running and configured WiFi. Next I installed the latest upgrades through apt and installed kernel 4.14.7, as kernels >= 4.14 include the NPT patch for better performance in VMs.
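To make sure the new kernel is actually running and NPT is enabled, a quick sanity check (assuming the kvm_amd module is loaded) is:

uname -r                                 # should report 4.14.7
cat /sys/module/kvm_amd/parameters/npt   # 1 means NPT is enabled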
Boot options as of now are:
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt amd_iommu=1 pcie_acs_override=multifunction video=efifb:off"
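(The edit to /etc/default/grub only takes effect after regenerating the boot config and rebooting; afterwards the IOMMU setup can be verified like this:)

update-grub
reboot
# after the reboot:
dmesg | grep -e AMD-Vi -e IOMMU
find /sys/kernel/iommu_groups/ -type l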
To get GPU passthrough up and running, I temporarily added the AMD card and put the NVIDIA card in the second PCIe slot.
With this setup I achieved the following results (hostpci0: xx:00,pcie=1,x-vga=on):
- VM1 boots up fine with both GPUs, AMD and NVIDIA
- VM2 boots up fine with both GPUs, AMD and NVIDIA
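(For reference: the hostpci0 line lives in /etc/pve/qemu-server/<vmid>.conf, and pcie=1 only works with the q35 machine type, so a minimal passthrough config looks roughly like this sketch, with OVMF assumed:)

bios: ovmf
machine: q35
hostpci0: xx:00,pcie=1,x-vga=on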
Once I had the NVIDIA card running, I extracted its ROM using GPU-Z.
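(Side note, in case someone wants to reproduce this without a Windows install at hand: the ROM can also be dumped on the host via sysfs, assuming the card is idle and 0000:27:00.0 is its address; Proxmox looks for romfiles under /usr/share/kvm/:)

cd /sys/bus/pci/devices/0000:27:00.0/
echo 1 > rom                         # make the ROM readable
cat rom > /usr/share/kvm/GP104.rom   # dump it where Proxmox expects romfiles
echo 0 > rom                         # lock it again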
Next try, still with both GPUs installed (hostpci0: 27:00,pcie=1,x-vga=on,romfile=GP104.rom):
- VM1 boots up fine
- VM2 won't show graphical output
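One thing I am not sure about is whether the dumped ROM contains a UEFI image at all, which would matter if VM2 runs under OVMF. rom-parser (https://github.com/awilliam/rom-parser) can check that:

git clone https://github.com/awilliam/rom-parser
cd rom-parser && make
./rom-parser /path/to/GP104.rom   # a "type 3 (EFI)" entry means the ROM has a UEFI part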
To verify that the system will boot with only the NVIDIA card installed, I removed the AMD GPU and placed the NVIDIA card in slot 1:
VM1:
runs fine, with or without the ROM attached to the hostpci entry
VM2:
won't show graphical output either way
The goal, obviously, is to get the Windows VM (VM2) running with the NVIDIA card as its single GPU.
Same with and without the ROM file attached.

root@pve:~# tail -f /var/log/messages
Dec 19 14:18:16 pve pvedaemon[1373]: <root@pam> starting task UPID: pve:0000187C:00064A85:5A391198:qmstart:101:root@pam:
Dec 19 14:18:19 pve kernel: [ 4125.047104] vfio_ecap_init: 0000:26:00.0 hiding ecap 0x19@0x900
Dec 19 14:18:19 pve kernel: [ 4125.087100] vfio_ecap_init: 0000:03:00.0 hiding ecap 0x19@0x200
Dec 19 14:18:19 pve kernel: [ 4125.087105] vfio_ecap_init: 0000:03:00.0 hiding ecap 0x1e@0x400
Dec 19 14:18:19 pve kernel: [ 4125.088500] vfio-pci 0000:25:00.0: enabling device (0400 -> 0402)
Dec 19 14:18:20 pve kernel: [ 4126.119241] vfio_ecap_init: 0000:25:00.0 hiding ecap 0x19@0x200
Dec 19 14:18:20 pve kernel: [ 4126.121509] vfio-pci 0000:27:00.3: enabling device (0000 -> 0002)
Dec 19 14:18:23 pve pvedaemon[1373]: <root@pam> end task UPID: pve:0000187C:00064A85:5A391198:qmstart:101:root@pam: OK
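In case it helps with diagnosing: which host driver the card is bound to can be checked like this (27:00 is my NVIDIA card, adjust as needed):

lspci -nnk -s 27:00
# "Kernel driver in use: vfio-pci" is the expected line for a passed-through device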