I just finished migrating my Windows 11 gaming VM from Unraid to Proxmox. Along the way I encountered some obstacles. To hopefully save other people some time I'm writing a detailed guide here on how I set up my system.
Disclaimer: I am by no means an expert on this stuff. It is entirely possible that I made some mistakes in this setup. If you notice something that is clearly wrong or something that could be improved please add your suggestions in the comments.
ProxmoxVE Version
I am writing this guide using ProxmoxVE version 8.1.3.
Hardware
This guide works for the following hardware:
CPU: AMD Ryzen 9 7950X
Mainboard: Gigabyte X670 Gaming X AX
GPU1: RTX A6000 (in main PCIe slot and not used for the Windows VM)
GPU2: RTX 3070 (in second PCIe slot and used for the Windows VM)
Note: You have to enable a bunch of virtualization options in the BIOS. Essentially, enable every setting that has "virtualization", "VT", or "IOMMU" in the name (on AMD boards like this one, the CPU virtualization toggle is usually called "SVM").
Checking IOMMU Groups
The first step is checking your IOMMU groups. Open a shell and run
pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
Here, {nodename} refers to the name you gave your Proxmox server. This gives you a list of all your PCIe devices with their respective IOMMU groups. Find the GPU you want to pass to your VM and check whether any other devices are in the same IOMMU group. If there are no other devices in the same IOMMU group you can skip the next step.
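The list can be quite long. To narrow it down, a small sketch (assuming your node is called "pve" and the card is an NVIDIA one):
pvesh get /nodes/pve/hardware/pci --pci-class-blacklist "" | grep -i nvidia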
Splitting IOMMU Groups
Disclaimer: Splitting your IOMMU groups has some security implications! Please read up on it and decide if this is acceptable for your use case! Don't just blindly follow what some guy wrote on the internet!
So...your GPU was not in its own IOMMU group. This can be fixed by adding
pcie_acs_override=downstream,multifunction
to the kernel parameters.
Most of the guides I found online tell you to edit the kernel parameters in /etc/default/grub. However, my installation of Proxmox does not use GRUB. Instead it uses systemd-boot, so in my case I had to add the kernel parameters to /etc/kernel/cmdline.
The /etc/kernel/cmdline file is a single line holding all kernel parameters, so the override just gets appended to what is already there.
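A sketch of the result on a default ZFS install (the root= part is made up, yours will differ; the important bit is the override at the end):
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet pcie_acs_override=downstream,multifunction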
In order for the change to take effect you need to run update-initramfs -u and then reboot the server.
Once the server is rebooted you can verify that the kernel parameters were actually added by running cat /proc/cmdline. If you don't see pcie_acs_override=downstream,multifunction in the output, it did not work. Either you did something wrong or your installation might be using GRUB instead of systemd-boot.
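If you are unsure which bootloader your installation actually uses, run
proxmox-boot-tool status
It lists the boot partitions Proxmox manages and whether they boot via GRUB or systemd-boot (uefi).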
Now run
pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
again and verify that your GPU is in its own IOMMU group. You might notice that your GPU was split into two devices - a VGA device and an audio device. Don't worry. This is fine. What is important is that no other devices share an IOMMU group with your GPU.
Getting the GPU ready for passthrough
In order for GPU passthrough to work you need to ensure two things. First, some vfio kernel modules need to be loaded, and second, the GPU driver itself must not be loaded.
To make sure the vfio kernel modules are available, add the three lines below to /etc/modules.
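On the 6.x kernels that ship with Proxmox VE 8 the three modules in question are (older guides also list vfio_virqfd, but on current kernels it has been merged into the core vfio module):
vfio
vfio_iommu_type1
vfio_pci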
Now run update-initramfs -u and reboot the server. Once the server is rebooted, run lsmod and verify that the three modules are actually loaded.
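A quick way to check just those modules:
lsmod | grep vfio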
To make sure that GPU drivers are not loaded you must blacklist them in /etc/modprobe.d/. I added the lines below to the file /etc/modprobe.d/pve-blacklist.conf. You could also create a separate .conf file in the same folder if you want to keep things separated.
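For an NVIDIA card the candidates are nouveau and the proprietary nvidia modules, so the entries look something like this (a sketch; extend it if lspci later shows a different driver grabbing your card):
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm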
Now run update-initramfs -u and reboot the server. Once the server is rebooted, run lspci -nnk and look for your GPU. If there is a line
Kernel driver in use: <some_driver>
your GPU still has a driver loaded. Add that driver to the blacklist, run update-initramfs -u, reboot, and check again. If the Kernel driver in use line is no longer there you are good to go.
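To avoid scrolling through the full lspci output you can filter for the card; a sketch, assuming an NVIDIA GPU:
lspci -nnk | grep -iA3 nvidia
The -A3 keeps the indented lines below each match, which is where the Kernel driver in use line would show up.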
Create a VM
Since we are setting up a gaming VM we want to make sure that (1) the VM performs well and (2) the VM does not know it's a VM. The reason we don't want the VM to know it's a VM is that some anti-cheat software (like EasyAntiCheat) will not let you play in a VM. It goes without saying...please don't cheat in games.
Disclaimer: Not all of the following steps might be necessary for Windows 11 not to know it's running in a VM. The following settings did, however, give me a VM which did not know it was a VM, and the performance is nearly identical to a bare metal Windows 11 install. So I'm not gonna spend the time to test all possible combinations to figure out which settings are really needed.
Step 1: Get a Windows 11 ISO.
Step 2: Upload the ISO to Proxmox using the WebGUI (Datacenter -> {nodename} -> local({nodename}) -> ISO Images -> Upload).
Step 3: Click on "Create VM" (top right in the GUI).
Step 4 (General): Give your VM a name and an ID. Also select the "Advanced" option at the bottom of the window. We'll need some advanced settings later.
Step 5 (OS): Select the ISO you uploaded and select "Microsoft Windows" as the Guest OS type. Note: We do not need any VirtIO drivers!
Step 6 (System):
- Graphics card: Default
- Machine: q35
- BIOS: OVMF (UEFI)
- Add EFI Disk: yes
- SCSI Controller: LSI 53C895A
- Qemu Agent: no
- Add TPM: yes
Step 7 (Disks):
- Bus/Device: SATA
- Cache: Write back
Step 8 (CPU):
- Sockets: 1
- Cores: However many cores you want
- Type: host
Step 9 (Memory):
- Memory: however much memory you want
- Ballooning Device: no
Step 10 (Network):
- Model: Intel E1000
Step 11: Confirm your settings, create the VM, but don't start it yet!
Note: The reason we choose LSI for our SCSI controller, SATA for our disks, and the Intel E1000 network card is that none of those need VirtIO drivers. In my experience, as soon as you add the QEMU guest agent or any VirtIO drivers, Windows knows it's a VM.
Now, we add our GPU to the VM.
Select your VM in the menu on the left, then click on "Hardware". Now click "add" and select "PCI Device". Select "Raw Device" and find your GPU in the drop down menu. Make sure you select the GPU and not the audio controller! Tick the "Primary GPU" and "PCI-Express" boxes. For the "PCI-Express" box you need to select "Advanced" at the bottom of the window. Do not select the "All Functions" checkbox!
Repeat the process for the GPU's audio device, but this time don't tick the "Primary GPU" checkbox.
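Behind the scenes both additions end up as hostpci entries in the VM config file. Assuming the GPU sits at PCI address 0000:0d:00 (a made-up address, yours will differ), the result looks something like:
hostpci0: 0000:0d:00.0,pcie=1,x-vga=1
hostpci1: 0000:0d:00.1,pcie=1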
Do not start the VM yet! We need some additional settings.
Run dmidecode -t 0 and dmidecode -t 1. This gives you some information about your BIOS (type 0) and your mainboard (type 1).
Navigate to your VM in the webGUI and select Options -> SMBIOS settings (type1). Enter as much information as you can find there, pulling the values straight from the dmidecode output.
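As an illustration of the format (made-up values; use whatever your own board reports):
Manufacturer: Gigabyte Technology Co., Ltd.
Product: X670 GAMING X AX
Version: Default string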
Then, add the following line at the top of the VM config file (/etc/pve/qemu-server/<your_vmid>.conf).
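The line is an args: entry, which passes raw options straight to QEMU; it carries the SMBIOS type 0 (BIOS) information the GUI has no fields for. A sketch with made-up values (take vendor, version, and date from your dmidecode -t 0 output):
args: -smbios type=0,vendor="American Megatrends Inc.",version=F21,date=03/06/2023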
Note: Don't forget the quotes for strings with spaces!
Finally, add the hidden=1 option to the cpu line. That is, change the line
cpu: host
to
cpu: host,hidden=1
(assuming you selected the host CPU type as described above).
That's it! You are now ready to install Windows. Of course you will need a mouse and keyboard attached to the VM to do that. Both can be added to the VM in the "Hardware" tab. Select Add -> USB Device, find your mouse and keyboard, and click on Add. Now, start the VM.
Note: As soon as the VM starts you need to press a key to boot from the CD. So be ready to press some key!
Note2: If you don't want to deal with a Microsoft account, here is a guide on how to avoid that.
Once Windows is installed, open the Task Manager and check whether or not Windows thinks it's a VM. If it thinks it's a VM it will say "Virtual machine: Yes". Let's hope it does not say that!
Assuming Windows did not detect it is running in a VM, press Win+R and start "msinfo32". The entries for your system's Manufacturer and Model, and the BIOS Version/Date, should match what you entered earlier. If it says anything about QEMU or any other VM related stuff there, EasyAntiCheat will probably detect it (at least in my experience).
Assuming everything looks good you are now ready to install some games and give it a test.
USB passthrough
Passing through USB devices using the WebGUI does work, but in my experience it can be a bit hit or miss with headsets and Xbox controllers. Also, everything crashes if you accidentally unplug the USB device.
A better solution is passing through a whole USB controller (just like the GPU). So let's see how this can be done.
In theory it's the same as passing through the GPU. In practice it's way more work.
Step 1: Figure out which USB ports are connected to the same USB root controller.
To do this, run lsusb -t. This shows you all your USB root hubs with their associated ports. To figure out where each port physically is, take some USB device, plug it into every physical port, and check each time where it shows up. Once you have mapped all the ports, find a USB root hub that is in its own IOMMU group.
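To tie a bus number from lsusb -t back to the PCI device (and therefore the IOMMU group) it belongs to, you can follow the sysfs symlink; a sketch, assuming bus 1:
readlink -f /sys/bus/usb/devices/usb1
The PCI address of the controller (something like 0000:0d:00.3) appears in the resolved path.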
Note: If you haven't already you will probably need to split your IOMMU groups now. Splitting the IOMMU groups will change the IOMMU group of your GPU! So you might need to adjust it in the VM config.
Step 2: Once you have identified a USB root hub that is in its own IOMMU group, you pass it through to the VM just like the GPU. The only difference is that you do not check the "Primary GPU" and "PCI-Express" check boxes.
Try to start your VM. If it starts, you should be able to use the USB ports associated with the USB root hub you passed through just like normal USB ports.
In my case this only worked if nothing was plugged into the USB ports when the VM was started. If something was plugged in while the VM was starting, the whole system would crash, shut off, and then reboot. I think this is due to some issue with the kernel driver not unloading properly, which causes the vfio-pci driver not to load. I found two solutions to the problem.
Solution 1: Before starting the VM, manually unload the driver by running
echo "<PCI address of the USB root hub>" > /sys/bus/pci/drivers/<the_driver>/unbind
(note the path is under /sys/bus/pci/, since the root hub is a PCI device). To check which driver is loaded for the USB root hub, run lspci -nnk.
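A concrete sketch, assuming the controller sits at the made-up address 0000:0d:00.3 and is bound to xhci_hcd (verify both with lspci -nnk first):
echo "0000:0d:00.3" > /sys/bus/pci/drivers/xhci_hcd/unbind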
Solution 2: The nuclear option. Simply blacklist all the drivers which bind to the USB controller. In my case, adding the offending drivers to /etc/modprobe.d/pve-blacklist.conf fixed the issue.
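On most boards the xHCI controller binds the xhci_pci and xhci_hcd modules, so the entries would look something like this (a sketch; be aware that this keeps the drivers away from every USB controller on the host, hence "nuclear"):
blacklist xhci_pci
blacklist xhci_hcd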
Final thoughts
I hope this was helpful to at least one person struggling to get a VM up and running. If you notice any mistakes or think anything is unclear please leave a comment.