The reasons for poor Windows performance when the CPU type is host

@t.lamprecht Your initial response here seemed somewhat dismissive and focused on the old hardware that was in use, which was understandable at first given the claimed impact, but the problem has since been demonstrated on the latest Intel desktop-class processors as well, by a number of different users.

Could you please review the information that’s been gathered here since the initial post and provide an updated recommendation for users? Right now the available presets leave us with a choice between working nested virtualization (“host”) and usable performance (“x86_64-v*”), with the next option seeming to be that each user goes through the process of creating a custom CPU type that closely matches “host” while working around this issue.

I suspect that you and the Proxmox team would prefer to support a simpler best practice than having every user with a Windows guest fiddle with custom CPU types.
 
Please note that Proxmox VE has already defaulted to x86-64-v2-AES for any VM created through the web UI's "Create VM" wizard since Proxmox VE 8.0, released two years ago.

Exposing the relevant flags for nested virtualization in the UI could be done, especially if it's only a small set of flags; we already have the infrastructure for that, so it would not be that much work.
And FWIW, there were also proof-of-concept patches for a UI integration of custom CPU models, which would lower the barrier significantly and probably makes sense to have anyway. They were not picked up due to relatively low demand and might need some rebasing, though.

Feel free to open an enhancement report in our Bugzilla instance and link this post here. I'd focus the request on allowing one to enable nested virtualization for VMs through the web UI; that is your core goal after all, and it leaves enough freedom for choosing a fitting implementation.
simpler best practice than having every user with a Windows guest fiddle with custom CPU types.
Well, to be fair, it's for any Windows guest that also needs nested virtualization; all others work just fine, and perform well, with the default CPU type. Don't get me wrong, as written above we're open to improving the status quo for those who prefer the web UI, but please avoid overblown statements; they do not really help to advance one's cause and might even do the opposite...
 
Well, to be fair, it's for any Windows guest that also needs nested virtualization; all others work just fine, and perform well, with the default CPU type. Don't get me wrong, as written above we're open to improving the status quo for those who prefer the web UI, but please avoid overblown statements; they do not really help to advance one's cause and might even do the opposite...

I have been in your shoes many times and understand that sentiment, but I think that statement was only slightly overblown. Nested virtualization with a Windows guest is not at all an exotic requirement, since it’s required for WSL. I don’t have any kind of usage stats, but given the already technical nature of many Proxmox users, I’d guess that using WSL within a Windows guest is far from uncommon. And the best available recommendations for a Windows guest with nested virtualization (on the wiki, a community effort to be sure, but still the best available) now point you in a direction that leads to clearly poor performance, and it’s unclear what those best practices should be updated to.

Plus, even if Proxmox’s default is not “host”, the fact that “host” is the currently recommended best practice for some use cases, and seems like a reasonable setting for most use cases where it isn’t clearly contraindicated, does, I bet, lead people to use it whether they strictly need nested virtualization or not. The fix is certainly simpler if you don’t need nested virtualization, that’s true, but discovering the link between CPU type and poor performance was not easy, and I would bet it generates plenty of support noise on its own.

In any case, the motivation was to point out that you might be looking at a growing support headache if this advice proliferates and people make predictable errors when creating custom CPU types, and that you may want to get ahead of it. I’m not under any delusion that my personal usage of Proxmox is important to anyone, nor am I worried about my own ability to figure this out given some time and effort. :)

I’m sure my prominent “New Member” title makes for some warranted default skepticism, whatever the content of the argument, but I’m not aiming to exaggerate here.
 
I've run another test after creating a custom CPU type which is host minus the two specific flags that the OP @Kobayashi_Bairuo claimed were the source of the issue, md-clear and flush-l1d:

Code:
❯ cat /etc/pve/virtual-guest/cpu-models.conf
cpu-model: host-windows-workaround
    flags -md-clear;-flush-l1d
    phys-bits host
    hidden 0
    hv-vendor-id proxmox
    reported-model host
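
For reference, I then applied the custom model by pointing the VM's CPU type at it; if I remember the docs correctly, custom models are referenced with a custom- prefix (the VM ID below is just a placeholder):

Code:
# Point the test VM at the custom model defined above (999 is a placeholder VM ID)
qm set 999 --cpu custom-host-windows-workaround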

Unfortunately, disabling only these two flags does not result in the same improvement in my Windows guest's performance that using x86_64-v3 does, so I suspect that something in the details of OP's analysis was incorrect. (But don't misunderstand me, I definitely appreciate OP's findings!)

I'm far from certain, but I would guess that the error comes from attempting to directly compare the CPU flag names from /proc/cpuinfo with QEMU's supported CPU flag names as reported by qemu-system-x86_64 -cpu help, which are spelled slightly differently in many cases. For example, cpuinfo reports md_clear, but QEMU spells it md-clear. I manually compared the flags reported by cpuinfo for host, Skylake-Client-v4, and x86_64-v3, but could not clearly identify corresponding QEMU flags for a handful of my i7-14700K's cpuinfo-reported flags (ibrs_enhanced tpr_shadow flexpriority ept ept_ad ospke) that were not in Skylake-Client-v4, and that was before attempting to match up all the vmx flags.
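
In case anyone wants to repeat that comparison, something like the following gets you most of the way. It's a quick-and-dirty sketch (I did it mostly by eye), so the parsing of the -cpu help output may need adjusting:

Code:
# Normalize /proc/cpuinfo's underscore spelling (md_clear) to QEMU's dash spelling (md-clear)
grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2- | tr ' ' '\n' | tr '_' '-' | sed '/^$/d' | sort -u > /tmp/host-flags

# Collect the flag names QEMU itself knows about
qemu-system-x86_64 -cpu help | sed -n '/Recognized CPUID flags/,$p' | tr ' ' '\n' \
    | grep -v -e '^$' -e '^Recognized$' -e '^CPUID$' -e '^flags:$' | sort -u > /tmp/qemu-flags

# Host flags that have no identically named QEMU counterpart
comm -23 /tmp/host-flags /tmp/qemu-flags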

A reasonably safe and proper fix that keeps nested virtualization working while restoring performance is still unclear to me at this point, and I don't love the idea of fiddling with individual CPU flags whose impact I don't understand. Skylake-Client-v4 seems to be a closer fit for my host CPU's flags than x86_64-v3 and still performs well, so I guess my next step is to try adding the right nested virtualization flags on top of it and see what that does.

If anyone knows...should I worry about trying to match my host's vmx flags closely, or is something simpler like Skylake-Client-v4,+vmx likely to be OK? (There's a rough sketch of what I mean below, after the cpuinfo dump.) There are quite a few vmx flags reported by cpuinfo for the host CPU:

Code:
processor       : 27
vendor_id       : GenuineIntel
cpu family      : 6
model           : 183
model name      : Intel(R) Core(TM) i7-14700K
stepping        : 1
microcode       : 0x12f
cpu MHz         : 4300.014
cache size      : 33792 KB
physical id     : 0
siblings        : 28
core id         : 43
cpu cores       : 20
apicid          : 86
initial apicid  : 86
fpu             : yes
fpu_exception   : yes
cpuid level     : 32
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
vmx flags       : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs ept_mode_based_exec tsc_scaling usr_wait_pause
bugs            : spectre_v1 spectre_v2 spec_store_bypass swapgs eibrs_pbrsb rfds bhi
bogomips        : 6835.20
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
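
To make the question concrete, the kind of entry I'm imagining looks something like this. It's an untested sketch: the model name is made up, and I'm not sure whether reported-model accepts the versioned Skylake-Client-v4 name or only the base Skylake-Client:

Code:
# /etc/pve/virtual-guest/cpu-models.conf -- untested sketch
cpu-model: skylake-nested-test
    flags +vmx
    phys-bits host
    hidden 0
    hv-vendor-id proxmox
    reported-model Skylake-Client-v4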
 
I've run another test after creating a custom CPU type which is host minus the two specific flags that the OP @Kobayashi_Bairuo claimed were the source of the issue, md-clear and flush-l1d:

Code:
❯ cat /etc/pve/virtual-guest/cpu-models.conf
cpu-model: host-windows-workaround
    flags -md-clear;-flush-l1d
    phys-bits host
    hidden 0
    hv-vendor-id proxmox
    reported-model host

Unfortunately, disabling only these two flags does not result in the same improvement in my Windows guest's performance that using x86_64-v3 does, so I suspect that something in the details of OP's analysis was incorrect. (But don't misunderstand me, I definitely appreciate OP's findings!)

Back then, I did exactly the same and made the same observation.
I also fiddled around with other flags and combinations, but without success, and rather quickly stopped going down this rabbit hole...
 
Heh, I’d love to stop going down this rabbit hole too, but I still want to work out how to add nested virtualization on top of the next-closest preset (Skylake-Client-v4, as far as I can tell). I’ll have to give that a try later tonight.
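
When I do, I'll probably sanity-check what actually ends up on the QEMU command line before poking around inside the guest, something along these lines (the VM ID is a placeholder):

Code:
# Show the full QEMU command line Proxmox generates for the VM and check whether vmx made it into -cpu
qm showcmd 999 | tr ',' '\n' | grep -i vmx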

My initial claim that nested virtualization probably sees significant usage with Windows guests is seeming less reasonable than I first thought, based on these responses. I’m not sure now, but I didn’t think WSL usage inside a guest was that odd of a requirement. Perhaps it is? In my case I’m aiming to use this guest for a thin-client setup at my 3D printing and tinkering workbench, replacing a Windows laptop that was a temporary solution I don’t want to keep there permanently.

Perhaps it’s time to reevaluate whether Windows is really the best option for this setup of mine, instead of simply shifting my existing setup into a VM one-for-one. I admit my preference for Windows here comes from a probably outdated heuristic: that I’m likely to end up spending more time tinkering with, or working around, the practical limitations of desktop Linux. For this particular system, I’d like to prioritize it just working when I want to use it, rather than it being a time sink of its own. But clearly I’m not meeting that goal currently; after all, here I am sinking time into getting my WSL/Linux shell working within Windows. I use Linux in various forms plenty and clearly prefer it on the command line, but I’m not an open-source purist, and I tend to weigh the overall experience over simply OSS vs. not.
 