Poor Windows VM Performance with over 64GB RAM assigned

TheCrug

New Member
Sep 27, 2024
Hello all,

I’ve noticed performance issues with Windows VMs, both Server 2022 and Server 2025, whenever more than 64GB of RAM is assigned. It doesn’t appear to be RAM exhaustion, as each host has over 1TB of RAM and only averages about 25-30% utilization, and I’ve tried different CPU profiles and options with no noticeable improvement. These are 32-core AMD EPYC Genoa hosts, specifically Dell R6615s, and all other Linux VMs and Windows guests under 64GB seem to perform fine.

Boot times, application and script execution, and overall usability really suffer past that 64GB mark, with very noticeable lag both in the console and over RDP, along with the “System” process taking significant CPU percentages for much longer stretches during boot and between tasks. It doesn’t seem to be tied to things like CCDs or NUMA, given it’s a single socket and the core counts should fit within a single node (8).

Things tried thus far with no noticeable improvement:
  1. Setting the CPU type to EPYC-Genoa, x86-64-v2, x86-64-v4, and host
  2. Enabling the CPU flags for 1 GB pages (pdpe1gb), ibpb, virt-ssbd, amd-ssbd, and hv-tlbflush (an example config line is below this list)
  3. Confirmed in the BIOS that the High Performance profile is set
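For reference, a rough sketch of how those flags end up in the VM config file; the VM ID placeholder and the exact combination shown here are just an example from my testing:

  # /etc/pve/qemu-server/<ID>.conf (illustrative; flags are joined with ';' and prefixed with '+')
  cpu: EPYC-Genoa,flags=+pdpe1gb;+ibpb;+virt-ssbd;+amd-ssbd;+hv-tlbflush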
I’m running out of ideas of what to check or try. I should note these hosts are still on 8.4.1.
 
To use giant pages (1 GiB hugepages) you need to:
- explicitly set a fixed number of such pages on the boot loader command line (/etc/default/grub or /etc/kernel/cmdline)
- set hugepages: 1024 in the VM conf file (manually)
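A minimal sketch, assuming a grub-booted host and a VM that needs 96 x 1 GiB pages (adjust the page count to the VM’s memory; on systemd-boot setups put the same parameters in /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

  # /etc/default/grub -- reserve the pages at boot, then run update-grub and reboot
  GRUB_CMDLINE_LINUX_DEFAULT="quiet default_hugepagesz=1G hugepagesz=1G hugepages=96"

  # /etc/pve/qemu-server/<ID>.conf -- back the VM's memory with 1 GiB pages
  hugepages: 1024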

I would also recommend setting up a NUMA topology in the VM config file (manually) - check the PVE docs.
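Something along these lines in the conf file (the CPU range, memory size and host node here are only placeholders):

  numa: 1
  numa0: cpus=0-7,hostnodes=0,memory=65536,policy=bind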
With cpupower (from the linux-cpupower package), check the current governor, idle states and frequency profile.
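For example, a quick sketch using the stock cpupower subcommands:

  cpupower frequency-info                   # driver, current governor and frequency range
  cpupower idle-info                        # available C-states and whether they are enabled
  cpupower frequency-set -g performance     # optionally pin the performance governor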
 
To use giant pages (1 GiB hugepages) you need to:
- explicitly set a fixed number of such pages on the boot loader command line (/etc/default/grub or /etc/kernel/cmdline)
- set hugepages: 1024 in the VM conf file (manually)
This was it, thank you. It turned out I didn’t have to set hugepages in the conf file manually; just having the pdpe1gb CPU flag set allowed me to go above 64GB.

Since I’ve spent the better part of a week trying to Google this, here are the steps that fixed it, for anyone searching for the same issue in the future:

  • SSH into the host running the VM you want 64GB+ of RAM on
  • Edit /etc/default/grub and change the GRUB_CMDLINE_LINUX_DEFAULT line to include hugepagesz=1G default_hugepagesz=2M inside the quotes
  • Run update-grub and reboot
  • Stop or shut down the VM, enable the pdpe1gb flag in the VM’s CPU settings or add hugepages: 1024 manually to the VM conf file in /etc/pve/qemu-server/<ID>.conf, and start the VM (a condensed command sketch is below)
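Condensed into commands, as a rough sketch (VM ID 100 and the host CPU type are placeholders; keep whatever CPU type you already use):

  nano /etc/default/grub    # add hugepagesz=1G default_hugepagesz=2M inside the GRUB_CMDLINE_LINUX_DEFAULT quotes
  update-grub
  reboot
  qm shutdown 100
  qm set 100 --cpu host,flags=+pdpe1gb    # or add 'hugepages: 1024' to /etc/pve/qemu-server/100.conf
  qm start 100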
Thanks again for the help; I’ve been chasing these performance issues for some time.
 

Please check
cat /proc/meminfo | grep Huge

In my setup I had to define:
hugepagesz=1G hugepages=N default_hugepagesz=1024M

where N is the number of hugepages sized to match the VM’s memory.
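For example (illustrative numbers), a VM assigned 96 GiB would need 96 one-GiB pages:

  hugepagesz=1G hugepages=96 default_hugepagesz=1024M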

P.S. As far as I know, 1G and 2M hugepages cannot be combined (once again, check /proc/meminfo).
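A quick sanity check after rebooting (a sketch; the field names come from /proc/meminfo, and the values to expect depend on what you reserved):

  grep Huge /proc/meminfo
  # HugePages_Total / HugePages_Free should match the number of pages you reserved,
  # and Hugepagesize should read 1048576 kB when 1 GiB pages are the default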