[SOLVED] Windows guests are extremely slow

May 21, 2025
Hello

I came from Hyper-V last month, where my VMs performed very well. The host server is the same, so I expected similar performance from a different hypervisor.

Is there a current set of best practices for Windows guests? I tried to follow whatever suggestions I could find on YouTube, but clearly none of them have helped.

This is the config of the domain controller VM:
Code:
agent: 1
balloon: 2048
bios: ovmf
boot: order=scsi0
cores: 6
cpu: host,flags=+hv-evmcs;+aes
efidisk0: Storage:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
machine: pc-q35-9.2+pve1,viommu=virtio
memory: 4096
meta: creation-qemu=9.2.0,ctime=1747927001
name: myVMname
net0: virtio=00:00:11:0D:00:17,bridge=vmbr0,tag=2
numa: 0
onboot: 1
ostype: win10
scsi0: Storage:vm-100-disk-2,backup=0,iothread=1,size=60G
scsihw: virtio-scsi-single
smbios1: uuid=0007f5b5-000c-0003-8000-bc739459000b
sockets: 1
startup: order=1
tags: dc;win
tpmstate0: Storage:vm-100-disk-1,size=4M,version=v2.0
vcpus: 1
vmgenid: 28498000-2000-4500-000d-673ac4000063

Let me know if I need to include something else.
---
Edit: I thought adding more cores might help, so yesterday I increased the core count from 4 to 6 and added the CPU flags; it had no effect.
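
For reference, the CLI equivalent (VMID 100 as in the config above); note that the semicolon in the flags list has to be quoted in the shell:

Bash:
qm set 100 --cores 6 --cpu 'host,flags=+hv-evmcs;+aes'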
 
I understand that you don't want to disable VBS, but if you're struggling with performance, that is most likely the culprit.

https://forum.proxmox.com/threads/t...-of-windows-when-the-cpu-type-is-host.163114/
I changed the processor type from host to x86-64-v2-AES and reduced the core count to 4 again, and performance is now at the expected level. But VBS now shows as enabled but not running, which is not OK. Not only does Proxmox not support Kernel DMA protection; now I can't use VBS either.
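
For reference, the same change from the host CLI (again assuming VMID 100):

Bash:
qm set 100 --cpu x86-64-v2-AES
qm set 100 --cores 4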

I'm complaining because the performance degradation was nowhere near as bad on Hyper-V with VBS. I understand that the real problem here is the older CPU. I've heard the performance drop from mitigations can reach 30%, but I'm seeing well above 50% with VBS enabled (which requires the host processor type). It was only slightly better than a slideshow.
Can't go back to Hyper-V either, since Hyper-V Server 2019 was the last free version.

I guess case closed :(
 
I wrote a small script to detect the proper CPU type for a VM configuration:

Bash:
#!/usr/bin/env bash
# cputest.sh — Detect x86-64 micro-arch levels (v2/v3/v4) and show a tidy summary.
# Options:
#   --no-color   disable ANSI colors
#   --full       also print the full "common flags across all cores" list

set -euo pipefail

# ------------------------ Options & color ------------------------
USE_COLOR=1
SHOW_FULL=0
for arg in "$@"; do
  case "$arg" in
    --no-color) USE_COLOR=0 ;;
    --full)     SHOW_FULL=1 ;;
  esac
done

if [[ ! -t 1 ]]; then USE_COLOR=0; fi

if (( USE_COLOR )); then
  C_ORANGE=$'\e[38;5;208m'
  C_GRAY=$'\e[90m'
  C_GREEN=$'\e[32m'
  C_CYAN=$'\e[36m'
  C_RESET=$'\e[0m'
  C_BOLD=$'\e[1m'
else
  C_ORANGE=""; C_GRAY=""; C_GREEN=""; C_CYAN=""; C_RESET=""; C_BOLD=""
fi

# ------------------------ Header ------------------------
print_header() {
cat <<'EOF'
               P R O X M O X   V M   C P U   C H E C K

EOF
}

print_header
echo

# ------------------------ Collect flags ------------------------
mapfile -t all_flags < <(grep -E '^flags\s*:' /proc/cpuinfo | cut -d: -f2- | sed 's/^[[:space:]]*//')
if ((${#all_flags[@]}==0)); then
  echo "Could not read CPU flags from /proc/cpuinfo." >&2
  exit 1
fi

# Build intersection of flags across all cores
read -ra base <<< "${all_flags[0]}"
declare -A present
for f in "${base[@]}"; do present["$f"]=1; done
for ((i=1;i<${#all_flags[@]};i++)); do
  declare -A cur=()
  read -ra tmp <<< "${all_flags[$i]}"
  for f in "${tmp[@]}"; do cur["$f"]=1; done
  for f in "${!present[@]}"; do
    [[ -n "${cur[$f]:-}" ]] || unset 'present[$f]'
  done
done

has() { [[ -n "${present[$1]:-}" ]]; }
require_all() { for f in "$@"; do has "$f" || return 1; done; }

# ------------------------ Levels (psABI/GCC aligned) ------------------------
v2_req=(cx16 lahf_lm pni popcnt sse4_1 sse4_2 ssse3)  # pni = SSE3 in /proc/cpuinfo
v3_req=(avx avx2 bmi1 bmi2 f16c fma movbe xsave)
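# Linux reports LZCNT support via the AMD-originated "abm" flag, so accept either name.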
has_lzcnt() { has lzcnt || has abm; }
v4_req=(avx512f avx512dq avx512cd avx512bw avx512vl)

level=1
if require_all "${v2_req[@]}"; then
  level=2
  if require_all "${v3_req[@]}" && has_lzcnt; then
    level=3
    if require_all "${v4_req[@]}"; then
      level=4
    fi
  fi
fi

cpu_model=$(grep -m1 '^model name' /proc/cpuinfo | cut -d: -f2- | sed 's/^[[:space:]]*//')

# ------------------------ Pretty printing helpers ------------------------
print_cols() { # prints words in N columns (default 4)
  local COLS=${1:-4}; shift
  local i=0 w
  for w in "$@"; do
    printf "  %-14s" "$w"
    (( ++i % COLS == 0 )) && printf "\n"
  done
  # Explicit if, not `(( i % COLS )) && printf "\n"`: when the word count is a
  # multiple of COLS, the bare arithmetic test would become the function's
  # (nonzero) return status and abort the whole script under `set -e`.
  if (( i % COLS )); then printf "\n"; fi
}

# ------------------------ Output ------------------------
echo "${C_BOLD}CPU:${C_RESET} $cpu_model"
case $level in
  4) echo "${C_BOLD}Detected level:${C_RESET} ${C_ORANGE}x86-64-v4${C_RESET}" ;;
  3) echo "${C_BOLD}Detected level:${C_RESET} ${C_GREEN}x86-64-v3${C_RESET}" ;;
  2) echo "${C_BOLD}Detected level:${C_RESET} ${C_CYAN}x86-64-v2${C_RESET}" ;;
  1) echo "${C_BOLD}Detected level:${C_RESET} x86-64-v1 baseline" ;;
esac
echo

echo "${C_BOLD}Key capabilities:${C_RESET}"
# Show a concise, readable summary
summary_flags=(avx avx2 fma f16c bmi1 bmi2 movbe lzcnt aes pclmulqdq sha_ni adx invpcid fsgsbase rdseed rtm hle)
have=(); for f in "${summary_flags[@]}"; do [[ -n "${present[$f]:-}" ]] && have+=("$f"); done
print_cols 6 "${have[@]}"
echo

echo "${C_BOLD}Level criteria check:${C_RESET}"
printf "  v2 requires: "; print_cols 6 "${v2_req[@]}"
printf "  v3 adds:     "; print_cols 6 "${v3_req[@]}" lzcnt
printf "  v4 adds:     "; print_cols 6 "${v4_req[@]}"
echo

if (( SHOW_FULL )); then
  echo "${C_BOLD}Common flags across all cores:${C_RESET}"
  mapfile -t all_present_sorted < <(printf '%s\n' "${!present[@]}" | sort)
  # print in 6 columns for compactness
  print_cols 6 "${all_present_sorted[@]}"
fi
exit 0
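
Run it on the PVE host (it reads /proc/cpuinfo, so inside a guest it would only see the virtual CPU):

Bash:
./cputest.sh          # summary only
./cputest.sh --full   # also list the flags common to all cores

Then pick the matching built-in model, e.g. qm set <vmid> --cpu x86-64-v2-AES.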
 
Have you tried adding the +vmx flag to the cpu line in your VM config to enable VBS again?
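
If qemu-server rejects +vmx in the flags property (as far as I know it only whitelists a handful of flags there), the documented fallback is a custom CPU model. A minimal sketch of /etc/pve/virtual-guest/cpu-models.conf; the model name v2-aes-vmx is my own, and you would likely want to add the rest of the v2-level flags as well:

Code:
cpu-model: v2-aes-vmx
    flags +vmx;+aes
    reported-model kvm64

Custom models are referenced with a custom- prefix, e.g. qm set 100 --cpu custom-v2-aes-vmx.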
 
I'm in the process of setting up two newer servers, and I'm going to decommission the existing solo hypervisor.
Maybe the Xeon Silver 4214 doesn't have that issue?
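
It should at least fare better: the Silver 4214 is Cascade Lake, which has in-silicon mitigations for several of the speculative-execution issues and supports Mode-Based Execution Control, which HVCI otherwise has to emulate on older CPUs. A quick check once it's up, using the same flag probe the script relies on (Cascade Lake should print all five v4-level AVX-512 flags):

Bash:
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -Ex 'avx512(f|dq|cd|bw|vl)' | sort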