[TUTORIAL] PCI/GPU Passthrough on Proxmox VE 8: Windows 10 & 11

This article is the second in a series of five dealing with the installation and configuration of VMs (Linux, Windows, macOS, and BSD) with PCI passthrough on Proxmox VE 8.

  • Part 0-4 PCI/GPU Passthrough on Proxmox VE Installation and Setup (Part. 00x04)
  • Part 1-4 PCI/GPU Passthrough on Proxmox VE: Windows 10/11 (Part. 01x04)
  • Part 2-4 PCI/GPU Passthrough on Proxmox VE: Debian 12 (Part. 02x04) soon ...
  • Part 3-4 PCI/GPU Passthrough on Proxmox VE: OpenBSD 7.2 (Part. 03x04) soon ...
  • Part 4-4 PCI/GPU Passthrough on Proxmox VE: macOS (Part. 04x04) soon ...
Well, after seeing how to configure our Proxmox hypervisor, let's now move on to setting up our first VMs running Windows 10 and 11.

Downloading the ISO images (Windows and VirtIO)

We will of course need a Windows 10 installation image, but that's not all: to ensure better compatibility between our VM and our host, we will also need the VirtIO drivers.
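
If you store your ISOs on the hypervisor's default local storage, a minimal way to fetch the VirtIO ISO from the shell looks like the sketch below; it assumes the Fedora project's stable download link, and the Windows ISO itself still has to be fetched manually from Microsoft's download page.

Bash:
# ISO directory of the default "local" storage on a PVE node
cd /var/lib/vz/template/iso
# Stable VirtIO driver ISO published by the Fedora project; the file
# name may differ from the virtio-win-0.1.229.iso used further down
wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
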
Creating the Windows 10 VM

Now let's move on to creating our first VM.

Bash:
qm create 100 \
    --name win-100 \
    --agent 1 \
    --memory 8192 \
    --bios ovmf \
    --sockets 1 --cores 4 \
    --cpu host \
    --net0 virtio,bridge=vmbr0 \
    --scsihw virtio-scsi-single \
    --boot order='ide2;ide0;scsi0' \
    --ostype win10 \
    --efidisk0 local-lvm:0 \
    --scsi0 local-lvm:150 \
    --ide0 PVE1:iso/virtio-win-0.1.229.iso,media=cdrom \
    --cdrom PVE1:iso/Win10_22H2_French_x64.iso \
    --machine q35 \
    --hostpci0 0000:01:00.0,pcie=1 \
    --hostpci1 0000:01:00.1,pcie=1
A few notes on the important points of this command:

  • scsi0 local-lvm:150 corresponds to the destination disk (150 GB) for installing Windows.
  • cdrom PVE1:iso/Win10_22H2_French_x64.iso: the location of the Windows 10 ISO, attached here as a virtual CD-ROM. In my case I mounted an NFS share from my NAS in Proxmox, so as not to have to store all my ISOs on the hypervisor; in your case the path will likely need to be changed to local:iso/Win10_22H2_French_x64.iso.
  • ide0 PVE1:iso/virtio-win-0.1.229.iso,media=cdrom: the location of the VirtIO drivers ISO, necessary for the Windows installation. Again, you will probably need to change the path to local:iso/virtio-win-0.1.229.iso,media=cdrom.
  • boot order='ide2;ide0;scsi0': in the boot order, the Windows installation image (ide2, which is where --cdrom attaches it) must come first.
  • hostpci0 0000:01:00.0,pcie=1 & hostpci1 0000:01:00.1,pcie=1: the PCI addresses of the GPU's video and audio functions (two entries, because the graphics card also handles sound; you can instead define a single hostpci entry for 0000:01:00, but the “All Functions” option must then be enabled). If you need to find these addresses, see the lookup example just below.
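
A quick way to look them up on the host (the output shown is illustrative; your GPU will appear with its own model and address):

Bash:
# List PCI devices with their [vendor:device] IDs
lspci -nn | grep -iE "vga|audio"
01:00.0 VGA compatible controller [0300]: ...   # -> hostpci0
01:00.1 Audio device [0403]: ...                # -> hostpci1
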

A few details about the CPU flags to use

Depending on your VM's use case and your hardware configuration, you may need to adjust the CPU flags in your QEMU configuration.

However, before making any changes to your VM, keep the following points in mind:

  • Your VM already has predefined CPU flags, depending on the CPU type selected (host, kvm64, qemu64, …).
For example, for my Windows VM this is what I get:

Code:
-cpu
'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt'
You can check this yourself for your VM with:

Bash:
qm showcmd VMID | grep --color -e "-cpu '\S*'"
This is therefore the first step to perform before any modification of your QEMU configuration.

  • On the other hand, defining new CPU flags, for example with the following command, will not completely erase the default configuration but will adapt it to your new parameters:
Bash:
qm set VMID --cpu host,hidden=1,flags=+pcid
On my default configuration, a command of this type would have the effect of adding two flags (kvm=off and +pcid):

Code:
-cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt,+pcid'
I use this example on purpose, because I often see this modification recommended in various guides, most often coupled with this other one:

Bash:
qm set VMID --args '-cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'
You will notice that there are a number of unnecessary flags, as they are already present in the default configuration. Let's take it point by point:

Code:
qm set VMID --cpu host,hidden=1,flags=+pcid
  • hidden=1: This is not a necessary parameter for passthrough per se, but it may be added if, in addition to your PCI passthrough configuration, you want to do “nested virtualization”, for example to be able to use Hyper-V or WSL from your Windows VM.
  • flags=+pcid: May improve the performance of a PCI passthrough configuration, but only applies to Intel processors.
Code:
qm set VMID --args '-cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'

  • +kvm_pv_unhalt: No need to redefine because already present in the default configuration.
  • +kvm_pv_eoi: No need to redefine because already present in the default configuration.
  • hv_vendor_id=NV43FIX: This option allows you to specify the hypervisor vendor ID. This may help resolve some compatibility issues with NVIDIA GPUs.
  • kvm=off: No need to redefine, because it is already set by hidden=1 in the first command.
In my case, I wouldn't need to add any additional flags.
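
If you do end up changing flags, a simple way to see exactly what a modification changes is to diff the generated QEMU command line before and after. A minimal sketch, assuming VM 100 and hypothetical file names under /tmp:

Bash:
# Snapshot the generated QEMU command line, one option per line
qm showcmd 100 | tr ' ' '\n' > /tmp/cpu-before
qm set 100 --cpu host,hidden=1,flags=+pcid
qm showcmd 100 | tr ' ' '\n' > /tmp/cpu-after
# Only the modified options (here the -cpu argument) should differ
diff /tmp/cpu-before /tmp/cpu-after
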

Installing Windows 10

All that remains is to start our VM and open the Proxmox noVNC console to begin the Windows 10 installation.

Bash:
qm start 100
When selecting the destination disk, the installer will not find any available disk. Simply load the corresponding driver from the VirtIO CD-ROM:

Code:
CDROM (VirtIO) > vioscsi > w10 > amd64
Apart from this small subtlety, the rest of the installation is unchanged, at least until the Windows 10 OOBE screen. Once there, switch to “Audit” mode with CTRL + SHIFT + F3.

This “Audit” mode creates a temporary user for us, giving us time to prepare the system with the graphics drivers, the VirtIO drivers, and the QEMU agent.

Windows will then restart, and a Sysprep window will appear; you can close it for now.

We can then install the rest of the VirtIO drivers and the QEMU agent; don't forget to update Windows and install the latest necessary drivers.
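
Once the agent is installed and the VM rebooted, you can check from the hypervisor that it responds; a quick sanity check, assuming VMID 100:

Bash:
# Returns silently (exit code 0) when the guest agent answers
qm agent 100 ping
# The agent can also report information about the guest
qm agent 100 get-osinfo
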

As I have a KVM switch to share my mouse and keyboard between my ThinkPad and the VMs hosted on the hypervisor, I have to add the IDs of both devices to my VM.

On the Proxmox host, I search the list of connected USB devices:

Bash:
lsusb | grep -E "Logitech|Lite-On"
Bus 001 Device 019: ID 046d:c08b Logitech, Inc. G502 SE HERO Gaming Mouse
Bus 001 Device 018: ID 04ca:007d Lite-On Technology Corp. USB wired keyboard
Then, all that remains is to add the two IDs to my VM:

Bash:
qm set 100 --usb0 046d:c08b,usb3=1
qm set 100 --usb1 04ca:007d,usb3=1
Since I will access this desktop directly through the graphics card's HDMI output, I no longer need the Proxmox console. I take this opportunity to remove the installation disks and redefine the boot order:

Bash:
qm set 100 --vga none
qm set 100 --delete ide0
qm set 100 --delete ide2
qm set 100 --boot order='scsi0'
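
You can then confirm the resulting configuration before rebooting (the output shown is illustrative):

Bash:
qm config 100 | grep -E "^(boot|ide|vga|hostpci)"
boot: order=scsi0
hostpci0: 0000:01:00.0,pcie=1
hostpci1: 0000:01:00.1,pcie=1
vga: none
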
Converting our VM into a Windows template

Now let's generalize our Windows installation. From inside the Windows VM, run this command:

Code:
c:\windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown

  • /oobe forces Windows to boot into the Out-of-Box Experience, as on a fresh install.
  • /generalize removes data specific to this machine and to our temporary session.
Once Sysprep has finished, it automatically shuts down the VM. All that remains is to convert it into a template from Proxmox with the following command:

Bash:
qm template 100
Deployment of new VMs

Now, if I want to deploy a new, preconfigured Windows VM, I simply clone my template with:

Bash:
qm clone 100 101 --full --name win-101
At startup, you will land on the system configuration screen (OOBE) asking you to define a new user, and that's it: you can use your new desktop without further configuration.
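
Note that --full creates a fully independent copy of the template's disks. On storage that supports it (LVM-thin, ZFS, …), you can instead create a lighter linked clone that shares the template's read-only base disk; a sketch with a hypothetical VMID 105:

Bash:
# Linked clone: near-instant, but tied to the template's base image
qm clone 100 105 --name win-105
qm start 105
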

Regarding Windows 11

The procedure is much the same; you will again need a Windows installation ISO (Windows 11 this time) and the VirtIO drivers ISO. To create the VM:

Bash:
qm create 102 \
    --name win-102 \
    --agent 1 \
    --memory 8192 \
    --bios ovmf \
    --sockets 1 --cores 4 \
    --cpu host \
    --net0 virtio,bridge=vmbr0 \
    --scsihw virtio-scsi-single \
    --boot order='ide2;ide0;scsi0' \
    --ostype win11 \
    --efidisk0 local-lvm:0 \
    --tpmstate0 local-lvm:0,version=v2.0 \
    --scsi0 local-lvm:150 \
    --ide0 PVE1:iso/virtio-win-0.1.229.iso,media=cdrom \
    --cdrom PVE1:iso/Win11_French_x64v1.iso \
    --machine q35 \
    --hostpci0 0000:01:00.0,pcie=1 \
    --hostpci1 0000:01:00.1,pcie=1 \
    --usb0 046d:c08b,usb3=1 \
    --usb1 04ca:007d,usb3=1

  • tpmstate0 local-lvm:0,version=v2.0 adds the famous TPM 2.0 chip required by Windows 11.
  • ostype win11: be sure to change the OS type.
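
As with the Windows 10 VM, you can quickly check that the TPM state and OS type were taken into account; the output below is illustrative, and the volume name and size are assumptions:

Bash:
qm config 102 | grep -E "^(ostype|tpmstate)"
ostype: win11
tpmstate0: local-lvm:vm-102-disk-2,size=4M,version=v2.0
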
Concerning the CPU flags in the QEMU configuration, the same advice applies as for a Windows 10 virtual machine.

During installation, when selecting the destination disk, load the Windows 11 driver from the VirtIO virtual CD-ROM:

Code:
CDROM (VirtIO) > vioscsi > w11 > amd64
At the end of the installation, after installing the VirtIO drivers, the QEMU agent, the system updates, and the latest necessary drivers, I can generalize the Windows image with Sysprep and, from the hypervisor, convert the VM into a template.

My original article (FR): https://asded.gitlab.io/post/2023-07-20-pci-passthrough-proxmox-14/
 
