I have managed to set up a Windows VM within Proxmox. I gave it two physical NVMe drives, installed the VirtIO drivers, and passed through a mouse and keyboard. All is well except for the GPU passthrough, which is the most important part of this VM for me. The VM is useless to me if I can't pass through the GPU.
Here are the settings I have made:
1) My /etc/default/grub file has
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
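For the new kernel command line to take effect, GRUB's config has to be regenerated and the host rebooted (this assumes a standard GRUB boot, not systemd-boot):
Code:
update-grub
reboot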
2) /etc/modules is updated to contain
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
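The module list only takes effect after refreshing the initramfs and rebooting:
Code:
update-initramfs -u -k all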
3) Running
Code:
dmesg | grep AMD-Vi
lists several lines, one of which says
AMD-Vi: Interrupt remapping enabled
4) IOMMU isolation.
The GPU I want to pass through (an AMD Vega Frontier Edition, watercooled) is in its own IOMMU group, as is its sound card, as can be seen by running these commands:
Code:
lspci | grep VGA
49:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
4c:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 6863
64:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 6863
Code:
lspci | grep 4c
4c:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 6863
4c:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Device aaf8
Code:
find /sys/kernel/iommu_groups/ -type l | grep 4c
/sys/kernel/iommu_groups/66/devices/0000:4c:00.0
/sys/kernel/iommu_groups/67/devices/0000:4c:00.1
The GPU and its sound card are in IOMMU groups 66 and 67, respectively.
I have verified that these devices are alone in their IOMMU groups.
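For reference, this is the kind of loop that dumps every IOMMU group together with the devices it contains (it just walks the standard sysfs layout, which is where the group numbers above come from):
Code:
# list each IOMMU group and the devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
    done
done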
5) Blacklisting radeon
/etc/modprobe.d/blacklist.conf contains
blacklist radeon
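A quick sanity check after a reboot is that the module is not loaded at all (no output means the blacklist works):
Code:
lsmod | grep radeon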
6) My /etc/modprobe.d/vfio.conf contains
options vfio-pci ids=1002:6863,1002:aaf8
because
Code:
lspci -n -s 4c:00
4c:00.0 0300: 1002:6863
4c:00.1 0403: 1002:aaf8
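After a reboot, the binding can be checked with lspci -k; if vfio-pci has claimed the card, the "Kernel driver in use" line should read vfio-pci for both functions:
Code:
# show the kernel driver currently bound to the GPU and its audio function
lspci -k -s 4c:00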
The problem here is:
The second GPU, which I am going to try to pass through to a second VM, sits at
64:00.0 (second GPU, Vega Frontier)
If I run
Code:
lspci -n -s 64:00
64:00.0 0300: 1002:6863
64:00.1 0403: 1002:aaf8
Therefore my VM conf contains
Code:
hostpci0: 4c:00.0,pcie=1,x-vga=on
hostpci1: 4c:00.1,pcie=1
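For context, the matching machine-related lines in the same VM config are along these lines (paraphrased from my conf; as far as I know, pcie=1 only works with the q35 machine type):
Code:
bios: ovmf
machine: q35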
With these settings the VM starts, but I get no image on the monitor attached to the GPU. The VNC console doesn't show any image either, presumably because the image is output through the passed-through GPU.
Questions:
1) Should I also be getting an image on the VNC console if everything is passed through correctly?
2) Is it a problem that both of my GPUs, although they have different PCI addresses, have exactly the same vendor/device IDs? Couldn't I enter into vfio.conf a complete PCI address together with the ID?
Because that way I would write
options vfio-pci ids=4c:00.0/1002:6863,4c:00.1/1002:aaf8 (if that is the correct syntax; I don't know)
But I would need a way to specify exactly, because both cards have the same IDs.
3) The BIOS is OVMF and the EFI disk is in order. If I remove the PCI devices from the VM, the VM starts fine, I get an image on the VNC console, Windows boots, and I am in Windows.
As soon as I add the PCI devices, I get the problem described above.
Perhaps I am doing something wrong?
Any help would be appreciated. I have managed to set up everything I need (apart from the passthrough of the optical disc), and this GPU passthrough is the last thing I need to get working.
I sure hope it is possible.
Any ideas are deeply appreciated. I have been trying to make these VMs happen for the past three weeks, going on a month now. I started with Unraid, then ESXi, and now Proxmox, and decided to stay with Proxmox, as it seems the most comprehensive of them all. I am just hopeful I will be able to make the VMs work with GPU passthrough.