I am so sorry, my bad, that args line was meant for Intel.

I don't have AMD... sorry!
It's just changing Intel to AMD so the help is good, but since it won't even take the cpu line, the args don't matter LOL. Others have had success with AMD, so I'm hoping a CPU upgrade will do the trick.
 
Whenever I use this line of parameters with hypervisor=off:

args: -cpu 'host,-hypervisor,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,kvm=off,hv_vendor_id=intel'

Windows will blue screen directly. The blue screen error is in the attached screenshot. Did I do something wrong?
 
Whenever I use this line of parameters with hypervisor=off:

args: -cpu 'host,-hypervisor,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,kvm=off,hv_vendor_id=intel'

Windows will blue screen directly. The blue screen error is in the attached screenshot. Did I do something wrong?
I am seeing this as well - but for me it appears to be the line

cpu: host,hidden=1

I have to use a CPU type of x86-64-v2-AES - but that seems to identify the machine as a VM to Windows.

I am running an older AMD Ryzen 7 2700 CPU - not an Intel so this could be a little different - but seems related to Windows and CPU capability identification.
 
I am seeing this as well - but for me it appears to be the line

cpu: host,hidden=1

I have to use a CPU type of x86-64-v2-AES - but that seems to identify the machine as a VM to Windows.

I am running an older AMD Ryzen 7 2700 CPU - not an Intel so this could be a little different - but it seems related to Windows and CPU capability identification.
I restored a backup of the same VM to another host with an AMD R5 3600, and with the same parameters it boots and runs normally. But the AMD host does not use ZFS as root. I don't know whether that has any effect on the guest system.
 
+1 I am also getting the same bluescreen. After starting the VM I get the "Press any key to boot from CD or DVD" text, I press a key, am shown the Proxmox logo for a second or two and insta-blue screen. Never been able to actually get the OS installed.

I am using Intel CPUs (seems most above are on AMD). Xeon E5-2630L v2's to be specific.
 
Following here since I am trying to do the same.

Adding this line to args: -smbios type=0,vendor="American Megatrends Inc.",version=1903,date="08/30/2023" gives me a QEMU error on startup. If I delete this and keep the kvm off etc., it starts up but still shows as a virtual machine. Any ideas?

swtpm_setup: Not overwriting existing state file.
kvm: type=0,vendor=American Megatrends Inc.,version=1903,date=08/30/2023: Could not open 'type=0,vendor=American Megatrends Inc.,version=1903,date=08/30/2023': No such file or directory
stopping swtpm instance (pid 53071) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1
Hey bro I am suffering the same shit as well; my error code being

swtpm_setup: Not overwriting existing state file.
kvm: ../hw/pci/pci.c:1637: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
stopping swtpm instance (pid 19091) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1

I am trying to PCI passthrough my 5700 XT to my Windows VM. If you have any idea how to resolve this, I'd be very thankful.
 
When exactly does it crash? When you boot from the ISO or once Windows is installed and you try to boot the Windows install? Do you get any other error messages that might help debug the problem?



Could you post the full "args" line you used? Or maybe the full config?
OK - I have things working now!

It was most likely related to the OLD AMD Ryzen 5 1700 CPU I was using... I knew I wanted to upgrade the CPU - just waiting for the right sale. I picked up and installed an AMD Ryzen 9 5900X and now everything is working fine as per your excellent instructions!

Thanks for documenting this as it made my experimentation and implementation so much easier!
 
Hey, I am having the BSOD issue after updating from Windows 22H2 to Windows 23H2! So the Windows update has definitely broken something for us all! Any ideas?
 
I've been using your config just fine, but today after I shut down my Proxmox computer and booted it again, I started getting a BSOD on my Windows 11 VM: SYSTEM THREAD EXCEPTION NOT HANDLED. After a few trials and errors I narrowed it down to removing the -hypervisor flag from the args line, but when I do that, Windows detects the system as a virtual machine. I don't know why this started happening all of a sudden when it was working just fine. Can you help me get it back to normal? :-( @MichaelBernasconi
 
Hey bro I am suffering the same shit as well; my error code being

swtpm_setup: Not overwriting existing state file.
kvm: ../hw/pci/pci.c:1637: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
stopping swtpm instance (pid 19091) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1

I am trying to PCI passthrough my 5700 XT to my Windows VM. If you have any idea how to resolve this, I'd be very thankful.
Did you manage to solve this problem? I've had the same problem for a week and I can't fix it.
 
@MichaelBernasconi First off, thank you for this thorough tutorial. It almost went flawlessly for me. My setup is:
Ryzen 7 5700G
MSI x570 Gaming Edge Wifi
XFX rx 6950xt

After installation I checked Device Manager and there was an error code 12 for my GPU, and I have no audio device available. I tried installing the AMD GPU auto-detect drivers and it failed. I rebooted after Windows updates and error 12 disappeared, but I still have no audio. Any help would be greatly appreciated. Thank you
 
Thank you so much for this tutorial! It worked flawlessly for me. Although I have not tested any anti-cheat software yet, Windows indeed does not recognize it is a VM and everything works (mostly) as expected. I'm using Windows 11 64-bit on Proxmox VE v8.1.4 with an RTX 2070 Super GPU and AMD Ryzen 5800X.

A number of additional observations:
  1. Boot time is inconsistent. Sometimes Windows boots near-instantaneously, sometimes the Proxmox logo seems to hang for a while before going to a black screen and only then booting into Windows (indicated by the loading circle under the Proxmox logo). In rare cases the VM hangs completely during boot with 100% CPU usage (8 cores). I have not been able to reliably reproduce the patterns, but it looks like the hardware configuration has something to do with it (other PCIe passthrough devices).
  2. I have 0 issues with USB passthrough. Whether I pass through a device, a port, or the whole controller, everything works as expected. In particular, everything keeps working when unplugging. The only exception is that sometimes the VM hangs when a driver install resets the USB device (noticed once when passing through a device by ID).
  3. After Windows has been installed, I can re-enable the IOMMU groups and pass through only the GPU as Primary GPU, PCI-Express, with All functions. This works as expected and does not show Windows it is a VM as far as I can tell. The GPU audio device is also automatically enabled and appears to be working.
  4. I have found no way to wake the Windows VM from sleep other than through the ProxmoxVE GUI. This is a little annoying, but not game-breaking. PVE GUI control of ACPI events (Shutdown, Hibernate, etc.) seems to work well out-of-the-box.
  5. Although the benchmarks check out, there is a degree of latency to the system that is hard to place. It feels as if the CPU is pegged at 100% sometimes (it is not, and could hardly be on a 5800X when doing nothing). It feels like it could also be an IO bottleneck, but the VM disk is stored on a Samsung 980 Pro 1TB NVMe SSD, so that shouldn't be the case.
  6. This method (not sure if the Intel E1000 virtual adapter is to blame) incurs a serious network performance penalty for the VM. I get about 50% of my normal internet bandwidth (500 Mbit/s instead of 1Gbit/s down, upload is better but very inconsistent) and 2ms-10ms additional latency. I tested this with fast.com.
Happy to hear other people's experiences.

EDIT: The IOMMU re-grouping magically stopped working. Out of the blue starting the Windows VM would nuke the physical network interface of PVE, requiring a hard reset. I cannot get it to work anymore with the IOMMU groups intact. Nothing changed on my end except a couple of hours of internet outage. Weird...
 
@MichaelBernasconi First off, thank you for this thorough tutorial. It almost went flawlessly for me. My setup is:
Ryzen 7 5700G
MSI x570 Gaming Edge Wifi
XFX rx 6950xt

After installation I checked Device Manager and there was an error code 12 for my GPU, and I have no audio device available. I tried installing the AMD GPU auto-detect drivers and it failed. I rebooted after Windows updates and error 12 disappeared, but I still have no audio. Any help would be greatly appreciated. Thank you

Are you blacklisting the audio device module as well? You need to do that too. Check the "Configuring the GPU for passthrough" section at this link for instructions.
 
It's fixed with
echo 1 > /sys/module/kvm/parameters/ignore_msrs
this is meant to be run in the node shell, correct? Sadly this has not worked for me

Downloading a new ISO (old was 23H1, new is 23H2) fixed it for me
 
Great topic, thank you. I ran an all-in-one (UnRaid, Docker, gaming machine) with an UnRaid/Windows VM for about 5 years. Hardly even noticed any performance hit in games and never had any issue with games that don't like VMs. When I did an upgrade to Windows 11, a 6700XT, and a 5800X3D, my setup went to sh*t performance-wise and I ended up giving up and just running a bare metal machine and building a separate AM4 UnRaid machine (with ECC RAM). I've since worked out that the issues were primarily when I upgraded to Windows 11 (didn't even do a clean install, lol). With the announcement of the AM5 9950X, I'm looking to jump back into the all-in-one goodness. This time it will be Proxmox + Windows VM + Ubuntu (Docker). As much as I love gaming, I just can't justify a machine wasting away with a single purpose.
 
I just finished migrating my Windows 11 gaming VM from Unraid to Proxmox. Along the way I encountered some obstacles. To hopefully save other people some time I'm writing a detailed guide here on how I set up my system.

Disclaimer: I am by no means an expert on this stuff. It is entirely possible that I made some mistakes in this setup. If you notice something that is clearly wrong or something that could be improved please add your suggestions in the comments.

ProxmoxVE Version
I am writing this guide using ProxmoxVE version 8.1.3.

Hardware
This guide works for the following hardware:
CPU: Ryzen 7950X
Mainboard: Gigabyte X670 Gaming X AX
GPU1: RTX A6000 (in the main PCIe slot and not used for the Windows VM)
GPU2: RTX 3070 (in the second PCIe slot and used for the Windows VM)
Note: You have to enable a bunch of virtualization options in the BIOS. Essentially enable every setting that has "virtualization", "VT", or "IOMMU" in the name.

Checking IOMMU Groups
The first step is checking your IOMMU groups. Open a shell and run pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist "". Here, {nodename} refers to the name you gave to your Proxmox server. This should give you a list of all your PCIe devices with their respective IOMMU group. Find the GPU you want to pass to your VM and check if there are any other devices in the same IOMMU group. If there are no other devices in the same IOMMU group you can skip the next step.
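If you want to double-check the grouping outside of pvesh, a small shell loop over sysfs (nothing Proxmox-specific, just the standard kernel interface) prints the same information:

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    echo "IOMMU group $g: $(lspci -nns ${d##*/})"
done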

Splitting IOMMU Groups
Disclaimer: Splitting your IOMMU groups has some security implications! Please read up on it and decide if this is acceptable for your use case! Don't just blindly follow what some guy wrote on the internet!

So... your GPU was not in its own IOMMU group. This can be fixed by adding pcie_acs_override=downstream,multifunction to the kernel parameters.
Most of the guides I found online tell you to edit the kernel parameters in /etc/default/grub. However, my installation of Proxmox does not use grub. Instead it uses systemd-boot. So in my case I had to add the kernel parameters to /etc/kernel/cmdline.

The /etc/kernel/cmdline file now looks like this for me:
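(illustrative example for a ZFS-root install; whatever root= options are already in the file stay untouched, only the ACS override gets appended)

root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet pcie_acs_override=downstream,multifunction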

In order for the change to take effect you need to run update-initramfs -u and then reboot the server.

Once the server is rebooted you can verify that the kernel parameters were actually added by running cat /proc/cmdline. If you don't see pcie_acs_override=downstream,multifunction in the output it did not work. Either you did something wrong or your installation might be using grub instead of systemd-boot.
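For a GRUB-based install the equivalent change would be appending the same option to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and then running update-grub, roughly like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream,multifunction"
update-grub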

Now run pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist "" and verify that your GPU is in its own IOMMU group. You might notice that your GPU was split into two devices - a VGA device and an audio device. Don't worry. This is fine. What is important is that no other devices share an IOMMU group with your GPU.

Getting the GPU ready for passthrough
In order for GPU passthrough to work you need to ensure two things. First, some vfio kernel modules need to be loaded, and second, the GPU driver itself must not be loaded.

To make sure the vfio kernel modules are available add the three lines
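# VFIO modules (the usual three on a PVE 8 kernel)
vfio
vfio_iommu_type1
vfio_pci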

to /etc/modules. Now run update-initramfs -u and reboot the server. Once the server is rebooted run lsmod and verify that the three modules are actually available.

To make sure that GPU drivers are not loaded you must blacklist them in /etc/modprobe.d/. I added the lines
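# assuming an NVIDIA card; for an AMD card blacklist amdgpu/radeon instead
blacklist nouveau
blacklist nvidia
blacklist nvidiafb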

to the file /etc/modprobe.d/pve-blacklist.conf. You could also create a separate .conf file in the same folder if you want to keep things separated. Now run update-initramfs -u and reboot the server. Once the server is rebooted run lspci -nnk and look for your GPU. If there is a line Kernel driver in use: <some_driver> your GPU still has a driver loaded. Add that driver to the blacklist, run update-initramfs -u, reboot, and check again. If the Kernel driver in use: <some_driver> line is no longer there you are good to go.

Create a VM
Since we are setting up a gaming VM we want to make sure that (1) the VM performs well and (2) the VM does not know it's a VM. The reason why we don't want the VM to know it's a VM is that some anti-cheat software (like EasyAntiCheat) will not let you play on a VM. It goes without saying... please don't cheat in games.

Disclaimer: Not all of the following steps might be necessary for Windows 11 not to know it's running in a VM. The following settings did, however, give me a VM which did not know it was a VM, and the performance is nearly identical to a bare metal Windows 11 install. So I'm not gonna spend the time to test all possible combinations to figure out which settings are really needed.

Step 1: Get a Windows 11 iso
Step2: Upload the ISO to Proxmox using the WebGUI (Datacenter -> {nodename} -> local ({nodename}) -> ISO Images -> Upload)
Step3: Click on "Create VM" (top right in the GUI)
Step4 (General): Give your VM a name and an ID. Also select the "Advanced" option at the bottom of the window. We'll need some advanced settings later.
Step5 (OS): Select the ISO you uploaded and select "Microsoft Windows" as the Guest OS type. Note: We do not need any VirtIO drivers!
Step6 (System):
  • Graphics card: Default
  • Machine: q35
  • BIOS: OVMF (UEFI)
  • Add EFI Disk: yes
  • SCSI Controller: LSI 53C895A
  • Qemu Agent: no
  • Add TPM: yes
Step7 (Disks):
  • Bus/Device: SATA
  • Cache: Write back
Step8 (CPU):
  • Sockets: 1
  • Cores: However many cores you want
  • Type: host
Step9 (Memory):
  • Memory: however much memory you want
  • Ballooning Device: no
Step10 (Network):
  • Model: Intel E1000
Step11: Confirm your settings, create the VM, but don't start it yet!

Note: The reason why we choose LSI for our SCSI controller, SATA for our disks, and the Intel E1000 network card is that we don't need any virtio drivers for any of those. In my experience, as soon as you add the QEMU guest agent or any virtio drivers, Windows knows it's a VM.

Now, we add our GPU to the VM.
Select your VM in the menu on the left, then click on "Hardware". Now click "add" and select "PCI Device". Select "Raw Device" and find your GPU in the drop down menu. Make sure you select the GPU and not the audio controller! Tick the "Primary GPU" and "PCI-Express" boxes. For the "PCI-Express" box you need to select "Advanced" at the bottom of the window. Do not select the "All Functions" checkbox!
Repeat the process for the GPU's audio device, but this time don't tick the "Primary GPU" checkbox.
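In the VM config this ends up as two hostpci lines. With example PCI addresses (substitute the ones from your own lspci output) they look something like:

hostpci0: 0000:02:00.0,pcie=1,x-vga=1
hostpci1: 0000:02:00.1,pcie=1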

Do not start the VM yet! We need some additional settings.
Run dmidecode -t 0 and dmidecode -t 1. This gives you some information about your mainboard.
Navigate to your VM in the webGUI and select Options -> SMBIOS settings (type1). Enter as much information as you can find there. For me this is:
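(the values are machine-specific; roughly, the dmidecode -t 1 output maps to the PVE fields like this)

Manufacturer -> the "Manufacturer" string
Product      -> the "Product Name" string
Version      -> the "Version" string
Serial       -> the "Serial Number" string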

Then, add the following line at the top of the VM config file (/etc/pve/qemu-server/<your_vmid>.conf).
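The general shape, with the values taken from the dmidecode -t 0 output, is:

args: -smbios type=0,vendor="<BIOS vendor>",version=<BIOS version>,date="<BIOS release date>"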

For me this looks like this:
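(illustrative values only; use whatever your own dmidecode -t 0 reports)

args: -smbios type=0,vendor="American Megatrends Inc.",version=1903,date="08/30/2023"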

Note: Don't forget the quotes for strings with spaces!
Finally, add the hidden=1 option to the cpu. That is, change the line
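cpu: host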

to
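cpu: host,hidden=1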


That's it! You are now ready to install Windows. Of course you will need a mouse and keyboard attached to the VM to do that. Both can be added to the VM in the "Hardware" tab. Select add -> USB Device, find your mouse and keyboard, and click on add. Now, start the VM.
Note: As soon as the VM starts you need to press a key to boot from the CD. So be ready to press some key!
Note2: If you don't want to deal with a Microsoft account here is a guide on how to avoid that.

Once Windows is installed, open the Task Manager and check whether or not Windows thinks it is a VM. If it thinks it's a VM it will say "Virtual machine: Yes". Let's hope it does not say that ;)
Assuming Windows did not detect it is running in a VM, press Win+R and start "msinfo32". The entries for your system's Manufacturer and Model, and the BIOS Version/Date, should match what you entered earlier. If it says anything about QEMU or any other VM-related stuff there, EasyAntiCheat will probably detect it (at least in my experience).
Assuming everything looks good you are now ready to install some games and give it a test.

USB passthrough
Passing through USB devices using the WebGUI does work, but in my experience it can be a bit hit or miss with headsets and Xbox controllers. Also, everything crashes if you accidentally unplug the USB device.
A better solution is passing through a whole USB controller (just like the GPU). So let's see how this can be done.
In theory it's the same as passing through the GPU. In practice it's way more work.

Step1: Figure out which USB ports are connected to the same root USB controller.
To do this run lsusb -t. This shows you all your USB root hubs with their associated ports. To figure out where each port physically is, take some USB device, plug it into every physical port, and check each time where it shows up. Once you have mapped all the ports, find a USB root hub that is in its own IOMMU group.
Note: If you haven't already you will probably need to split your IOMMU groups now. Splitting the IOMMU groups will change the IOMMU group of your GPU! So you might need to adjust it in the VM config.

Step2: Once you have identified a USB root hub that is in its own IOMMU group you pass it through to the VM just like the GPU. The only difference is that you do not check the "Primary GPU" and the "PCI-Express" check boxes.

Try to start your VM. If it starts you should be able to use the USB ports, which are associated with the USB root hub you passed through, just like a normal USB port.
In my case this would work if nothing was plugged into the USB port when the VM was started. If there was something plugged into the USB port when the VM was starting the whole system would crash, shut off, and then reboot. I think this is due to some issue with the kernel driver not unloading properly which causes the vfio-pci driver not to load. I found two solutions to the problem.

Solution 1: Before starting the VM manually unload the driver by running echo "<PCI ID of the USB root hub>" > /sys/bus/pci/drivers/<the_driver>/unbind. To check which driver is loaded for the USB root hub run lspci -nnk.
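For example, with a controller at 0000:0b:00.3 bound to xhci_hcd (both the address and the driver name here are placeholders; check lspci -nnk for yours):

echo "0000:0b:00.3" > /sys/bus/pci/drivers/xhci_hcd/unbind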

Solution 2: The nuclear option. Simply blacklist all the drivers which bind to the USB controller. In my case adding
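# assuming an xHCI controller; check lspci -nnk for the actual driver names
# note: this disables these drivers for every USB controller on the host
blacklist xhci_pci
blacklist xhci_hcd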

to /etc/modprobe.d/pve-blacklist.conf fixed the issue.

Final thoughts
I hope this was helpful to at least one person struggling to get a VM up and running. If you notice any mistakes or think anything is unclear please leave a comment.
Thanks man, this is like treasure for me.
I just started the VM based on your guidelines. (Lucky me, I even had almost the same hardware configuration, haha)
 
