Windows VM: Really Bad Memory Performance!

Anti-Ctrl
Jan 23, 2021
Hello there lovely people.

So, as the title says, memory performance is really bad. I have been trying to debug this for 3 or 4 weeks now and I'm all out of ideas. In a Linux VM I get around 24 GB/s with a 1M block size, which is around the maximum my board/system can handle. I used the Phoronix Test Suite as a measurement. In Windows 10 and Windows 7 I use AIDA64 and get around 1.5 GB/s read and 900 MB/s write. I also ran the Phoronix Test Suite on the Windows 10 VM and got speeds around 100 times slower compared to a Linux VM.
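For anyone wanting a comparable Linux-side number, a plain memory benchmark with a 1M block size can be run like this. I measured with the Phoronix Test Suite, so this sysbench command is just an equivalent way to get a similar figure, not the exact test I ran:

# sequential memory write test, 1M blocks, 64G total transferred
sysbench memory --memory-block-size=1M --memory-total-size=64G run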

Things I tried:
Disabled/enabled NUMA
Disabled HPET in the VM and the BIOS
Enabled hugepages on the host and inside the Windows VM (see the sketch after this list)
Checked the memory on the host
Changed performance options in the BIOS
Pinned CPU cores
Updated to the latest stable VirtIO drivers
Asked in the Level1Techs forum
Surely there is something missing from this list.
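A minimal sketch of what the host-side hugepages setup can look like on Proxmox; the 1 GiB page size and the count of 16 are illustrative assumptions (they pair with the +pdpe1gb CPU flag in the config further down), not necessarily the exact values used here:

# /etc/default/grub on the host, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet default_hugepagesz=1G hugepagesz=1G hugepages=16"

# /etc/pve/qemu-server/105.conf, so the guest actually uses them (size in MB)
hugepages: 1024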

Is there ANYTHING I can test further?
BTW, CPU interrupts are in the high 30,000s, with spikes to 80,000, while the Windows VM is running.

My system:
CPU: Dual Intel Xeon X5675
MB: Asus Z8NA-D6
RAM: 96 GB 1066 MHz DDR3 CL7 from Samsung (on the supported list)
Graphics card: R9 380 (passed through to the Windows VM)
PSU: 550 W cheapo. Not crappy, but not great either. A good one is on its way.
EDIT: PSU changed to a Seasonic PX-650. No difference in performance. (Why would there be, lol)
 
Have you tried without the GPU passthrough? Or the other way around: have you tested the Linux VMs with GPU passthrough as well?
 
Did you install the ballooning driver, or did you try without ballooning enabled?
 
As stated at the beginning, I have already done that in various configuration types. It helps a bit, but it's not NEARLY the performance a VM should have, let alone what bare metal would have.
Sorry, came from another thread and just skimmed it.
 
Post your VM config.
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi1
cores: 12
cpu: host,flags=+pcid;+spec-ctrl;+pdpe1gb;+aes
efidisk0: VMData:105/vm-105-disk-1.raw,size=128K
hostpci0: 08:00,pcie=1,x-vga=1
machine: q35
memory: 8192
name: Windows
net0: virtio=AA:BB:65:8D:10:93,bridge=vmbr0,firewall=1
numa: 1
ostype: win10
scsi1: VMData:105/vm-105-disk-0.raw,discard=on,size=120G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=a426a383-4be5-465f-a272-e8433ee771e3
sockets: 2
vcpus: 6
vga: none
virtio0: VMData:105/vm-105-disk-2.raw,discard=on,size=250G
vmgenid: aa311f1e-f19b-4494-9702-c9d2da2ab92a
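
One thing worth noting about this config (an observation, not something tested in this thread): with numa: 1 on a dual-socket board, guest memory can also be bound to a specific host NUMA node with an explicit numaX entry, so a benchmark isn't measuring cross-node traffic. A hypothetical example; the values are not taken from this setup:

numa0: cpus=0-5,hostnodes=0,memory=8192,policy=bind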
 
Well, this config looks like you already tried out everything that's possible xD

I wanted to suggest hugepages, but you probably tried that too xD

Maybe it's something super stupid, like the ZFS ARC eating all the host RAM so your guest's RAM gets reallocated or something like that.
Dunno, but did you try benchmarking with 4 GB RAM vs 16 GB RAM inside the VM?

The last option to at least somewhat increase performance is to compile an optimized kernel. But I highly doubt that it will fix anything; it will just give you a bit better performance (like +10% at most).
 
Yeah, I tried like everything I could imagine. The latest thing I did was disable IPv6 in the VM and update to the latest VirtIO drivers (v0.190). Didn't do the trick. Checked the BIOS like 12 million times; nothing I can see. Hugepages enabled or disabled is irrelevant to this problem. The ZFS ARC is limited to 8 or 16 GiB of RAM, I don't know exactly, but there is plenty of free RAM for the VM. It makes no difference whether 4, 8 or 16 GB of RAM is assigned.

It MUST be something software related, as Linux VMs run just fine, with full RAM speed. I'm just about ready to "throw away" this Proxmox install and switch to VMware ESXi to check whether the problem persists, but it would be somewhat of a hassle tbh. Not THAT much work in theory, but yeah... IDK m8... I really can't take it anymore tbh. I wish there were someone from the Proxmox officials who would have a look into this and could at least point me in some direction.
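
For checking the ARC side of this, a rough sketch of how to read the current ARC size and pin its maximum on the host; the 8 GiB cap is an example value, not necessarily what this host uses:

# current ARC size in GiB (the "size" field in arcstats)
awk '/^size/ {printf "%.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats

# cap the ARC at 8 GiB (8 * 1024^3 bytes), then rebuild the initramfs and reboot
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all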
 
I'm compiling a kernel for you; you can try that to make sure.

About the officials, idk, they probably wouldn't look without a subscription etc. But your problem really doesn't look like a layer 8 problem xD

However, wait a bit. The compilation takes 2 hours or so, and I'm going to sleep, so I will post a link tomorrow morning (like 8-9 hours from now).
 
I really appreciate your effort, but idk if I want to run a custom kernel from someone on the Interwebzz lol. Maybe you can first tell me what you customized?
Thanks anyway, and have a good sleep, till l8r :)
 
5.11.5 pve-edge, building with gcc-11 and -O3 + -march=westmere flags xD
Aah, and with a bit of malware, to use your host as my personal crypto-mining rig xD
That's a joke...

I mean, you want to switch to VMware, so what do you have to lose xD
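
For reference, this is roughly what "gcc-11 with -O3 and -march=westmere" translates to in a kernel build. It's a generic mainline-style invocation and an assumption on my part; the actual pve-edge build scripts wrap it differently:

# from the kernel source tree: build Debian packages with gcc-11 and the tuned flags
make -j"$(nproc)" CC=gcc-11 KCFLAGS="-O3 -march=westmere" bindeb-pkg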
 
https://cloud.golima.de/s/E7KiWLFgzSp4MNP

Here you go.

Ignore the rdrand issue there; it has nothing to do with Westmere.
Cheers.

Thanks m8, appreciate it!
I will test the kernel out, but first I've got some questions.
What do we expect to happen with this new kernel?
Are there drawbacks?
Can I update in the normal fashion, or will that remove the new kernel?
Does this mess with ZFS?
Do I need to set up PCIe passthrough again?
And how can I revert all of this if something goes wrong?
 
It doesn't remove anything; you can uninstall it with apt remove --purge pve-edge*
and revert the AppArmor config file in /etc.

ZFS works the same way; it's the same version as in the official 5.4 kernel, not a newer or older one, so no changes there.

Just install it with dpkg -i pve-edge...., then update-initramfs -k all -u, update-grub, edit the AppArmor config, done. (Spelled out as commands below.)
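
A condensed version of those steps as shell commands; the .deb name in the post is truncated, so the file name here is a placeholder:

# install the custom kernel and regenerate boot files
dpkg -i pve-edge-kernel_5.11.5_amd64.deb   # placeholder name, use the actual .deb
update-initramfs -k all -u
update-grub

# to revert later
apt remove --purge 'pve-edge*'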

There shouldn't be any drawbacks. What do I expect? Not much tbh; it's meant as a try for your performance case, to see if it gets faster etc...
There is too much code change between 5.4 and 5.11, so it's very possible that your bug is fixed, if it's kernel related and not QEMU related.

Cheers
 