Big performance difference between baremetal and VM.

PFilip

New Member
Jan 21, 2024
6
0
1
Poland
Hello,
I have an issue with performance in a VM. I tried messing with the config, args and other settings, but nothing seems to work.
I tried lowering the number of cores in the hardware section of the VM, lowering memory, setting args back to stock, changing the CPU type, reinstalling the OS, creating a different VM, disabling mitigations, etc.
As you can see from the PassMark results I provided, the performance difference is big.

Proxmox host spec:
Xeon E5-2630v4
DDR4 4x8GB (2666MHz CL19)
XFX RX 6600 (passthrough to VM)
PVE 8.1.4 (pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-7-pve))

VM config:
Code:
agent: 1
args: -cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,+invtsc'
balloon: 0
bios: ovmf
boot: order=sata0;ide2;net0
cores: 16
cpu: host
efidisk0: local-lvm:vm-102-disk-1,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:05:00,pcie=1,romfile=vbios-xfxrx6600.bin
hotplug: disk,network,usb
ide2: local:iso/virtio-win-0.1.240.iso,media=cdrom,size=612812K
machine: pc-q35-8.1
memory: 10240
meta: creation-qemu=7.2.0,ctime=1679563559
name: InkaVM-v2
net0: virtio=82:55:13:1F:40:7E,bridge=vmbr0
net1: virtio=CE:DD:E6:82:D7:BF,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: win10
protection: 0
sata0: local-lvm:vm-102-disk-0,cache=writeback,discard=on,size=120G,ssd=1
scsi1: lexar-1000:vm-102-disk-0,backup=0,cache=writeback,discard=on,size=400G,ssd=1
scsi2: chrupek-500:102/vm-102-disk-0.qcow2,backup=0,cache=writeback,discard=on,size=350G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=d64f6615-0f4d-8016-ffff-1de971325348,manufacturer=UEZpbGlwVGVjaA==,product=WDk5LVR1cmJv,version=MDYuMDQuMjAyMw==,serial=RUY1REQ0MjA=,sku=U0tVTEw=,family=SW5rdcWbIEdhbWluZw==,base64=1
sockets: 1
tablet: 0
tags: bandytamoc;inkavm;logiceksploduje
vga: none
virtio0: lexar-1000e:102/vm-102-disk-0.qcow2,discard=on,iothread=1,size=1G
vmgenid: 06960840-91a6-4fe8-bfb0-cc1fb5a804bb

Here is the PassMark result from the host: LINK
PassMark result from the host on Windows:
windows-baremetal.png
PassMark result from the VM:
PerformanceTest64_suYZX2B5oO.png

PS: English isn't my first language, sorry for any mistakes.

Thanks,
Filip
 
Let me summarize briefly: you get a higher result under a direct Windows installation that can use all 40 threads than in a VM that you have limited to 16 cores? To recap the whole thing: that's no wonder.

But considering you took 24 "cores" away from the VM, the result isn't bad at all.
 
Let me summarize briefly: you get a higher result under a direct Windows installation that can use all 40 threads than in a VM that you have limited to 16 cores? To recap the whole thing: that's no wonder.

But considering you took 24 "cores" away from the VM, the result isn't bad at all.
I have 20 "cores" total on my CPU, not 40.
Here is test with 20 "cores" on VM.
Single thread is worse than on baremetal.
1705852736506.png
 
I have 20 "cores" total on my CPU, not 40.
That's right, I always think dual-socket when I read about E5 CPUs :D

But then you still have 4 fewer cores in the VM than Windows had before.
You can only compare them if both run on the same basis. Virtualization will always cost you something.
 
That's right, I always think dual-socket when I read about E5 CPUs :D

But then you still have 4 fewer cores in the VM than Windows had before.
You can only compare them if both run on the same basis. Virtualization will always cost you something.
I set the VM to 20 cores in that latest test.
Just to make things clear: the performance difference between the Linux host and the Windows VM is big.
 
I've been talking to PFilip out of band so I know a little bit about this situation.

But then you still have 4 fewer cores in the VM than Windows had before.
You can only compare them if both run on the same basis. Virtualization will always cost you something.
Right, but please consider the following facts, which I don't think were emphasised strongly enough in the original post:

Using Passmark:
  • The Windows baremetal performance of this CPU gives about 1700pts for the single-thread score
  • The Proxmox baremetal score (yes, there's a pt_linux version) gives about 2000pts (!) for single-thread
  • The Windows VM (16 "cores" passed through, so 16 threads) single-thread performance is only around 1400pts (note: whether 16 or 20 is passed through has no bearing on the single-thread score, and frankly in their testing the impact on the multi-thread score was marginal too)
Not only is that a HUGE difference in terms of Windows baremetal-vs-VM, but also Proxmox-vs-Windows baremetal (300pts each way, not insignificant by any means).

When we spoke I suggested various things like
  • ensuring it was using CPU type "host"
  • duplicating the CPU flags into the args: line and adding combinations of flags like +invtsc
  • trying host,hv-passthrough
  • manually specifying various Broadwell CPU types
In all cases, the VM single-thread performance was about 1400 points, so effectively unaffected. The consequence is that real gaming performance is significantly worse - they tested using a game (with GPU passthrough, which seems to work fine) that was assuredly not GPU bound, and the Windows baremetal performance was over 90fps whereas the Windows VM barely managed 60fps.
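For reference, the kinds of commands we tried looked roughly like the following (the VMID and exact flag combinations here are illustrative, not necessarily the exact ones from our testing):
Code:
# Pass all Hyper-V enlightenments straight through via the args: line
qm set 102 --args "-cpu host,hv-passthrough,+invtsc"

# Or set an explicit Broadwell model as the VM CPU type instead of 'host'
qm set 102 --cpu Broadwell-noTSX-IBRS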

I'm far from an expert but I strongly suspect that, if the hardware can get 1700pts single-thread performance in Windows baremetal and 2000pts in Proxmox baremetal, a Windows VM result of only 1400pts indicates not that this is typical overhead that the user just has to live with, but rather that surely, something is misconfigured.

Perhaps PFilip could also spin up a Linux VM and run the pt_linux tool there?
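Roughly like this, assuming the PassMark download URL and archive layout haven't changed:
Code:
# Download PassMark PerformanceTest for Linux and run it inside the guest
wget https://www.passmark.com/downloads/pt_linux_x64.zip
unzip pt_linux_x64.zip
cd PerformanceTest
./pt_linux_x64   # interactive menu; run the CPU test suite from there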
 
OK, so we're getting in the region of 1500-1600 single-thread passmark points in a Linux VM. A little higher than the Windows VM (1400), but not exactly surprising - it's possible that the Linux VM outperforming the Windows VM (1600 vs 1400) and the Proxmox baremetal outperforming the Windows baremetal (2000 vs 1700) have a common cause unrelated to our main predicament (i.e. "Windows is bloated"). But we are nonetheless left with what to me seems like a much bigger VM performance loss than I would expect.

It might be useful to, for now, ignore Windows entirely, and focus discussion of "how to claw back that performance inside the VM" to the comparison between Proxmox baremetal and the Linux VM. I am sure that PFilip would sincerely appreciate if any Proxmox users or forumgoers could chime in on how they've set their VMs up with this in mind, especially in the context of their CPU type. The Xeon E5-2630v4 is Broadwell... right?
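If anyone wants to sanity-check which CPU model and feature flags the guest actually sees, something like this (just an illustration), run on both the PVE host and inside the Linux VM, should do:
Code:
# Compare the CPU model and feature flags the guest sees against the host
lscpu | grep -E 'Model name|Flags'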
 
You might also want to disable mitigations in the PVE host's bootloader for more performance and install the Intel microcode package in PVE. I guess you already use the "host" CPU type, VirtIO SCSI single + SCSI disks, a VirtIO NIC, and have ballooning and KSM disabled according to the Windows best practices?
Also don't forget to disable "core isolation" in the Windows VM!
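Roughly like this for a host that boots with GRUB (adjust accordingly for systemd-boot, and note that intel-microcode needs the non-free / non-free-firmware repo component enabled):
Code:
# /etc/default/grub on the PVE host
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# apply the new cmdline, install the microcode package and reboot
update-grub
apt update && apt install intel-microcode
reboot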
 
You might also want to disable mitigations in the PVE host's bootloader for more performance and install the Intel microcode package in PVE. I guess you already use the "host" CPU type, VirtIO SCSI single + SCSI disks, a VirtIO NIC, and have ballooning and KSM disabled according to the Windows best practices?
Also don't forget to disable "core isolation" in the Windows VM!
The Intel microcode package wasn't installed. After installing it and rebooting, the PassMark results in the Linux VM and on the host are similar.
I've disabled mitigations with `mitigations=off` in the GRUB config on the host.
I tried using the "host" CPU type, VirtIO SCSI (with writeback enabled) + a VirtIO NIC.
Ballooning and KSM are disabled.
1705865323800.png
 
Also, I checked mitigations in the Linux VM and they show as enabled (?), while on the host they show as disabled.
How is this going? Did you try disabling the mitigations in the guests? If you've made any progress, it would be useful for other users in the future if you shared it publicly.
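In case it helps: the host's mitigations=off does not carry over into the guest kernel, so the guest needs the parameter on its own kernel command line too. The status can be checked the same way on the host and in the guest:
Code:
# Print the current mitigation status for every known vulnerability
grep -r . /sys/devices/system/cpu/vulnerabilities/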
 