CPU types, word of caution

IsThisThingOn

Active Member
Nov 26, 2021
Hi everyone,

This is just in case someone stumbles upon the same error while troubleshooting.
I am not asking for help, though I’m open to explanations if anyone has one.

Roughly two years ago, I created a new Proxmox host with VMs.
Two weeks ago, I read about CPU types—something I had never looked into before—and realized I had been using the default x86-64-v2.
Since I don’t run a cluster, I thought it was time to switch to the host CPU type to take advantage of that sweet ZFS speed.

Unfortunately, setting the CPU type to host made some Windows 11 VMs extremely slow—clicks in Explorer or network drives became sluggish.
Strangely, a newer Windows 11 VM (created a few months ago) was unaffected.
I didn’t notice any difference in performance for the Linux VMs.

Switching back to x86-64-v2 made everything speedy again.
Setting it to x86-64-v4 didn’t work for my Intel Xeon E-2236 (which seems odd), but x86-64-v3 worked perfectly fine.

So, if your Windows VMs are acting up, try experimenting with different CPU types :)
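For anyone who would rather experiment from the shell than the GUI, the CPU type can also be switched with `qm` (VM ID 100 is a placeholder here, not from the thread; the change takes effect on the next VM start):

```shell
# switch VM 100 to a different CPU model, then confirm the setting
qm set 100 --cpu x86-64-v3
qm config 100 | grep '^cpu'
```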

BTW: I also just ran some PVE updates that did not ask for a reboot but affected QEMU. No idea if that is somehow related.
 
Windows Server 2025, and possibly Win 11, are slow when using CPU model host, at least here on a Xeon E5-2697 v3.

Try disabling nested virtualization on your host:

Add a file kvm-intel.conf to /etc/modprobe.d/ containing
Code:
options kvm-intel nested=N

Don't forget to reboot after.
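For reference, the same change done entirely from the shell; the sysfs path is a standard way to confirm the option took effect after the reboot (it prints N, or 0 on newer kernels, when nesting is disabled):

```shell
# write the modprobe option, reboot, then verify via sysfs
echo 'options kvm-intel nested=N' > /etc/modprobe.d/kvm-intel.conf
cat /sys/module/kvm_intel/parameters/nested
```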

Performance should then be back to normal.
 
That's because the Xeon E-2236 does not support AVX-512, and x86-64-v4 essentially adds the AVX-512 instruction set on top of v3.
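A quick way to check this on any Linux host: x86-64-v4 requires the AVX-512 foundation flag (avx512f), which shows up in /proc/cpuinfo if the CPU supports it:

```shell
# x86-64-v4 requires AVX-512; avx512f is the baseline flag to look for
grep -q avx512f /proc/cpuinfo && echo "AVX-512 supported" || echo "no AVX-512"
```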
Ahh that makes sense.

How did I not find these threads?! :)

Money quote:

First pic x86-64-v2-AES: Windows detects the VM (Virtual machine = Yes), so no nested Hyper-V running (required for VBS)
Second pic "host": Windows doesn't detect the VM (Virtualisation = Enabled), which suggests Hyper-V or WSL is enabled within the Windows guest and/or optional args are used (hidden=off or kvm=off)

Thank you @steve72 for your input.
Feels strange to set this in the Hypervisor and not the host :)

Do I lose anything by doing that? For example, if I want to run Linux plus some Docker containers later, do they need some kind of virtualization support? Or does that only apply to Docker on Windows, because there is no native support there?
I am sometimes surprised how much stuff nowadays is running in some kind of sandbox or VM, without me as a user even knowing.

This is probably irrational, but somehow I feel safer setting the CPU type to v3-aes than tinkering with modprobe.
 
For those who are running Linux VMs...

I don't consider any variance < 10% to be significant given the inherent variability of benchmarks. I used sysbench v1.0.20.

BIOS vs UEFI: BIOS is faster but UEFI is becoming the norm. Less than 10% difference.

Host vs. x86-64-v4: All sysbench benchmarks were about the same, except for the threads test, which was 3x faster!
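For anyone wanting to reproduce a comparison like this inside a VM, sysbench invocations along these lines would do it (the thread count and duration here are my assumptions, not the poster's actual parameters):

```shell
# illustrative sysbench runs; adjust --threads to the VM's core count
sysbench cpu --threads=8 --time=10 run
sysbench threads --threads=8 --time=10 run
sysbench memory run
```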

Just a quick look.