Windows 2016 CPU hotplug support

scott.ehas

Oct 5, 2017
Hello All,

I am trying to get CPU hotplugging working on Windows Server 2016. The VM has the latest updates, drivers, and the QEMU guest agent. On the Proxmox 5.2 side I have hotplug enabled, with NUMA turned on and vCPU cores configured. Do I need any special configuration on the guest OS side to get this working?
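For reference, here is roughly what the host-side settings look like in the VM config. This is a minimal sketch; the VMID 100 and the core counts are placeholders, not necessarily my exact values:

Code:
# /etc/pve/qemu-server/100.conf -- relevant lines only
cores: 8                              # ceiling: max cores the VM can ever use
vcpus: 4                              # cores actually present at boot
numa: 1                               # NUMA is required for CPU hotplug
hotplug: disk,network,usb,memory,cpu  # 'cpu' enables CPU hotplug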

Memory hotplugging worked out of the box.
 
I too had issues with CPU hotplugging on Proxmox inside a Windows Server 2016 guest. Getting it working required fixing some things inside the guest. My digging turned up a bug reported against QEMU in the Red Hat bugzilla that is really an ACPI bug in Server 2016 (and Windows 10): https://bugzilla.redhat.com/show_bug.cgi?id=1377155. Essentially, Microsoft ships a non-standards-compliant ACPI configuration out of the box, but it can be fixed fairly easily to work with QEMU/Proxmox. Because of the ACPI issue, Windows cannot see any CPUs that weren't present at boot; in fact, no CPU devices show up in Device Manager at all, and you get an errored "HID Button over Interrupt Driver" device there instead. The process below fixes CPU hotplugging and clears the errored device from Device Manager.

I wrote a script to automate the process as well; it's a pretty simple batch script. To run it you need two executables from Microsoft that are not included in Windows. You need psexec.exe from Sysinternals to run the reg command in the SYSTEM context so it can delete the bad registry keys, and devcon.exe from the Windows Driver Kit (WDK). To avoid installing the whole kit, which you likely don't need, follow this link and do the administrative install, which just downloads the kit components. When you get to the point of extracting the MSI, the correct one is Windows Driver Kit-x86_en-us.msi, not the one mentioned in the link, which no longer exists since that post is pretty ancient.

Run the script, or do the steps manually, and reboot the guest. CPU hotplugging (at least for adding cores) will then work.


Script is here:

Code:
@echo off
echo "removing bad device"
\\fileserver\shares\Software\_Scripts\fix_acpi_proxmox\devcon.exe remove "ACPI\VEN_ACPI&DEV_0010"
pause
echo "removing 1st regkey"
\\fileserver\shares\Software\_Scripts\SysinternalsSuite\PsExec.exe /accepteula /s reg delete "HKLM\SYSTEM\DriverDatabase\DriverPackages\hidinterrupt.inf_amd64_d01b78dcb2395f49\Descriptors\ACPI\ACPI0010" /f
pause
echo "removing 2nd regkey"
\\fileserver\shares\Software\_Scripts\SysinternalsSuite\PsExec.exe /accepteula /s reg delete "HKEY_LOCAL_MACHINE\SYSTEM\DriverDatabase\DeviceIds\ACPI\ACPI0010" /f
pause
echo rebooting
shutdown /r /t 30


EDIT 5 March, 2019: As per a comment in the Red Hat bugzilla link, vCPU hotplug also requires the guest to be assigned MORE than 2 GB of RAM. A guest configured with 2.0 GB of RAM or less will fail to hotplug vCPUs even with the above changes implemented. Increasing to anything over 2.0 GB (even 2049 MB) allows hotplug to work in conjunction with the above fix.
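Since anything over 2.0 GB works, bumping the guest just past the threshold from the Proxmox host shell is enough (VMID 100 is a placeholder):

Code:
# qm set takes memory in MB; anything over 2048 works, even 2049
qm set 100 --memory 2049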
 
Tried the suggested workaround, but it didn't work. When I add more cores to the CPU while the VM is running, the change still shows in red as pending until the next restart.
 
Did you also go into the options on the VM and add CPU to the hotplug list?

[Screenshot: VM → Options → Hotplug, with CPU added to the hotplug list]
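The same option can also be set from the host CLI if you prefer (VMID 100 is a placeholder):

Code:
# add 'cpu' to the VM's hotplug list alongside the defaults
qm set 100 --hotplug disk,network,usb,memory,cpu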
 
Also, did you have the errored "HID Button over Interrupt Driver" device in Device Manager, and is it gone now?
 
Are you able to see CPU devices in Device Manager, one for each core the VM booted with? If so, when you delete them and scan for hardware changes, do they come back?
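If it's easier than clicking through Device Manager, you can check from an elevated prompt with devcon.exe — a sketch, assuming devcon.exe is in the PATH:

Code:
:: list devices in the Processor class (expect one entry per booted core)
devcon.exe listclass Processor
:: rescan for hardware changes, same as Device Manager's rescan action
devcon.exe rescan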
 
Also, how do you have the CPUs configured for the VM? I ask because there is a correct way and a wrong way to configure them for hotplugging.

[Screenshot: VM CPU settings, with Cores set to 8 and VCPUs set to 4]

Here's an example of how to do it. 'Cores' needs to be set to the maximum number of CPU cores the VM will ever be able to use; it does not directly control how many virtual cores the VM boots with, it is the ceiling the VM can reach without a shutdown. 'VCPUs' is the number of cores the VM actually has access to at boot. To hot plug more cores, increase 'VCPUs', up to the number set in 'Cores'. The example VM above would start with 4 cores and be able to hot plug an additional 4 virtual cores while running.
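On the host side, hot plugging then just means raising VCPUs on the running VM (VMID 100 is a placeholder, and 'Cores' must already be set high enough):

Code:
# go from 4 to 6 active cores while the guest runs (ceiling: cores = 8)
qm set 100 --vcpus 6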

edited for clarity
 
I tried it; it worked, but the moment I increased the vCPU count, the VM hung and rebooted :(
It seems Proxmox and Windows guests are not best buddies.
 
Can we have an update from Proxmox staff? Is anyone planning to fix this issue?

It's really not a Proxmox problem. The same thing even happens on VMware: https://vinfrastructure.it/2018/05/windows-server-2016-reboot-after-hot-adding-cpu-in-vsphere-6-5/

You seem to be configured correctly now and know how to actually add vCPUs; previously you either weren't configured right or weren't adding them the right way. Make sure your Server 2016 install is fully up to date and try again.
 
I'm not trying to be the only naysayer, but it's still not working for any of my Windows VMs. I have installed the latest Windows updates, double-checked QEMU, etc.

Have you managed to fix this? I am currently having the same problem.

Update: found a solution:
https://bugzilla.redhat.com/show_bug.cgi?id=1377155#c17

Follow those instructions. Even though I am on a later build, I still had the wrong drivers. After following them, hotplugging worked immediately, without even a reboot.
 
