Single VM on host, how to allocate all CPU?

AxisNL

Jun 4, 2022
Let's say I have a small Proxmox host with an E-2236 CPU (single socket, 6 cores; with hyperthreading you get 12 'virtual cores').

I'm hosting a single VM on this hypervisor (because virtualization is great, Proxmox backups are cool, etc.). I understand it is quite a unique use case.

How would I use the maximum CPU for this single VM?

In my VMware courses I learned that you should give the VM a maximum of 6 vCPUs in this case, and the hypervisor will schedule those 6 vCPUs onto the physical cores. The hypervisor is 'hyperthreading-aware' and will not schedule tasks on two virtual cores that share the same physical core. If you assign more cores, performance will go down.

Does the same theory apply to Proxmox (i.e. KVM)?

Any theory from the experts? Best practices? Experiences?
 
AFAIK, in Linux all (CPU) threads/cores are equal from the OS's point of view, so scheduled threads (lightweight processes) are simply spread across all of them. Whether that has a positive or negative impact will hugely depend on your workload.
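
To illustrate: from the host you can check how Linux enumerates the topology. The output shown is just an illustration for a 6-core/12-thread CPU like the E-2236:

Code:
# Show how Linux enumerates sockets, cores and threads
lscpu | grep -E '^CPU\(s\)|Thread|Core|Socket'
# Example output (illustrative):
#   CPU(s):              12
#   Thread(s) per core:  2
#   Core(s) per socket:  6
#   Socket(s):           1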

Hyperthreading is also one word describing different evolutionary stages of the technology. It got better and better over the years (the penalty keeps getting lower), but in general, virtualization and optimized performance are two opposing goals, so your use case is not fully optimizable per se; you will have trade-offs. One of them is that the best performance from your CPU will IMHO not be on par with bare metal.
Virtualization - as the name suggests - abstracts the real hardware, so your VM will not be able to "see" the real CPU with its layout, and performance-optimized code (e.g. relying on cache coherence, thread affinity, memory locality with NUMA) will just not work as well as on bare metal. The same goes for memory: it will not be as fast as on bare metal, and the same applies to storage. You will of course get a ton of new features, but they all come at a performance cost.

In my tests, if you have HT enabled and are NOT using all threads for your workload, you will get worse results, until you use all threads, at which point it becomes faster. So if you give your VM the same number of vCPUs as the host has threads, you will most probably get the most out of it (performance-wise). You should also pass the CPU through (type 'host') in order to have all features available (so no further virtualization abstraction).
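
As a sketch, that setup could look like this on the PVE CLI (VMID 100 is just a placeholder; the same can be set in the GUI):

Code:
# Give the VM as many vCPUs as the host has threads
# and pass the host CPU model through (no feature masking)
qm set 100 --sockets 1 --cores 12 --cpu host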

Having the same number of cores (vCPUs) inside as outside is no big problem, because each vCPU is a process/thread on your host OS, so it'll simply be scheduled and share the available power with PVE.
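
You can actually see those vCPU threads on the host; QEMU names them "CPU n/KVM". For example (VMID 100 again as a placeholder):

Code:
# List the threads of the VM's QEMU process;
# each vCPU shows up with a name like "CPU 0/KVM"
ps -T -p "$(cat /var/run/qemu-server/100.pid)"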

If you really want the best "raw" performance and you're on Linux, just use LXC containers.
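
A minimal sketch (container ID and template name are just placeholders):

Code:
# Create a container that can use all 12 host threads
pct create 101 local:vztmpl/debian-11-standard_11.3-1_amd64.tar.zst \
  --cores 12 --memory 8192 --unprivileged 1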
 
Well, there's one way to find out for sure, and that's testing.

I have a spare machine (I'm migrating all my boxes from ESX to Proxmox), so I did a clean install of Proxmox 7.2 on it (and later a clean install of ESXi 7.0f), created a single VM, and tested that VM with different vCPU counts. I know a single VM is not a normal use case, but it is mine ;)
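
For reference, one way to run such a comparison (the exact benchmark used here isn't named; sysbench is just an example) is to run the same multi-threaded CPU test on bare metal and inside the VM:

Code:
# Compare events/sec on bare metal vs. inside the VM
apt install sysbench
sysbench cpu --threads=12 --time=30 run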

I must say I am flabbergasted, both by the great performance of ESX (up to 99% of native) and the bad performance of Proxmox (up to 71% of native). I would have expected them both to be around 85-90%.

One conclusion is that there seems to be no reason to disable hyperthreading. Funny thing is that ESX keeps pushing out more performance if you add more vCPUs (although the %RDY counter goes up really badly, so not recommended).

I still like Proxmox a lot, but the single-VM performance is a bit disappointing.

[Attachment: single-vm-performance.png]
 
Did you use CPU type = "host" for the VM or the default kvm64 (which should lose performance, as not all CPU features can be used)?
 
Did you use CPU type = "host" for the VM or the default kvm64 (which should lose performance, as not all CPU features can be used)?
First time I've even heard about this option ;) I will test with this feature.
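
One way to see the difference from inside the guest, by the way:

Code:
# With kvm64 the guest sees a generic QEMU CPU with few flags;
# with type host it sees the real model and its full flag set
grep -m1 'model name' /proc/cpuinfo
grep -m1 '^flags' /proc/cpuinfo | wc -w   # rough count of visible CPU flags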
 
Well, there's one way to find out for sure, and that's testing.
I could not have said it better! Thanks for sharing your numbers; I'm looking forward to the type=host benchmark. While you're at it, could you also try to benchmark the memory access? That would also be interesting.
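
Just as a suggestion, a possible memory benchmark with sysbench could be:

Code:
# Sequential memory throughput; compare MiB/sec bare metal vs. VM
sysbench memory --memory-block-size=1M --memory-total-size=10G run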
 
Here's the update (I left out the HT-disabled stats from the graph):

With type=host, the performance is a bit faster than ESX. Why is this not the default!? I wonder what other kind of performance I'm missing out on with default configs.


[Attachment: single-vm-performance-update1.png]


Oh, and sorry, not too much time left to test the memory.
 
With type=host, the performance is a bit faster than ESX. Why is this not the default!? I wonder what other kind of performance I'm missing out on with default configs.
Because a virtualization product is optimized for virtualization, so that you can easily migrate from one host to the next. That is not as easy as it sounds if you optimize for performance: with CPU type 'host', the guest sees CPU features that a migration target may not have. The default favors easy virtualization over performance; those are contrary goals. In your setup, however, you can manually optimize for performance, since you don't need good live migration capabilities.
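
For reference, switching between the two is a one-liner either way (VMID 100 as a placeholder):

Code:
# Check which CPU model a VM currently uses (no output = kvm64 default)
qm config 100 | grep '^cpu:'
# Migration-friendly default vs. full host CPU features:
qm set 100 --cpu kvm64
qm set 100 --cpu host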
 
