Physical vs. Virtual CPU limits

bearhntr

Member
Sep 9, 2022
167
13
23
Atlanta, GA USA
I have just built a new Proxmox 7.2-7 VM host with 64 GB RAM and one Core i5-6500 CPU. The BIOS has all of the VT-d and VT-x settings enabled. This CPU has 4 cores. When I start building VMs, am I limited to 4 CPUs across all of them? Meaning if I make a VM with 1 socket / 2 cores, could I then do only ONE more like that, or 2 more with 1 socket / 1 core? Is there a tool or something to help figure this all out?
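Before sizing anything, it can help to confirm what the host actually has. A minimal check from the PVE shell (on an i5-6500, which has 4 cores and no Hyper-Threading, both of these should come out to 4):

# total threads the scheduler can use
nproc
# sockets / cores per socket / threads per core
lscpu | grep -E '^(CPU\(s\)|Socket|Core|Thread)'

In the VM's CPU dialog, the number of vCPUs the guest sees is simply Sockets x Cores, so 1 socket / 2 cores = 2 vCPUs.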

 
If you choose the safest approach, 1 vCPU = 1 physical thread: when your two VMs with two cores each run at 100% CPU, the physical CPU also reaches 100% without either VM having to wait for the other. But each VM is limited to its own vCPUs, so in your case each one can use at most 50% of your CPU. More experienced users will correct me.
In practice a VM rarely runs at 100% CPU for hours on end, so we overprovision CPU.
vCPU allocation depends on the number of VMs and on the tasks running inside them.
In my own case I use a few hosts with only 2 Windows VMs each; I set each VM's vCPU count equal to the number of physical threads. That lets a single VM use the whole physical CPU, but things slow down if both VMs run at 100% of their vCPUs at the same time.
Sorry for my English.
 
A little confusing. Here is my scenario...

I would like to have 4 (maybe 5) VMs on this box. None of them are truly CPU-intensive apps (Home Assistant, pfSense (or OPNsense), OpenWrt for a wireless AP, and possibly some of the other Home Assistant add-ons). I would like each VM to have 4-16 GB RAM and 1 socket / 2 cores.

I am familiar with VMware ESXi on a real server with a Xeon CPU, and have set up 10-12 VMs with 4 vCPUs each even though the host does not have anywhere near that many CPU threads.
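As a rough sketch of that layout on the CLI (the VM IDs and memory sizes below are made up for illustration; the GUI dialog sets exactly the same options):

# 1 socket x 2 cores = 2 vCPUs per VM
qm set 101 --sockets 1 --cores 2 --memory 4096   # Home Assistant
qm set 102 --sockets 1 --cores 2 --memory 8192   # pfSense / OPNsense
qm set 103 --sockets 1 --cores 2 --memory 4096   # OpenWrt AP
# check what a VM ended up with
qm config 101 | grep -E '^(sockets|cores|memory):'

Four or five VMs like that add up to 8-10 vCPUs on 4 physical threads, which Proxmox will run without complaint; it only refuses a single VM configured with more than 4 vCPUs.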
 
You can assign as many vCPUs as you want to VMs. You just need to be aware that if you are oversubscribed and multiple VMs are busy at the same time, they will get reduced performance relative to not being oversubscribed. It works the same as VMware in that regard.
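If contention does become a problem, there are also per-VM knobs to shape it instead of avoiding oversubscription entirely; a short sketch with hypothetical VM IDs (cpulimit caps absolute CPU time, cpuunits changes the scheduling weight relative to other VMs):

# cap VM 101 at the equivalent of 1.5 host cores
qm set 101 --cpulimit 1.5
# give VM 102 a higher CPU scheduling weight than the other VMs
qm set 102 --cpuunits 2048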
 
Maybe this helps:
Note It is perfectly safe if the overall number of cores of all your VMs is greater than the number of cores on the server (e.g., 4 VMs with each 4 cores on a machine with only 8 cores). In that case the host system will balance the Qemu execution threads between your server cores, just like if you were running a standard multi-threaded application. However, Proxmox VE will prevent you from starting VMs with more virtual CPU cores than physically available, as this will only bring the performance down due to the cost of context switches.
https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_cpu

In short: With your 4 physical threads, you could in principle run, e.g., 10 VMs with 4 vCPUs each. But no single VM can have more than 4 vCPUs.

While you can overcommit CPU resources quite freely, you should not do it with your memory: before the PVE host runs out of memory, the OOM killer will start killing processes (afaik beginning with the one(s) with the highest memory usage at that moment), and those high-memory processes are most likely your VMs.
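A quick way to sanity-check the memory side on the host (this just sums the memory: lines of all VM configs on the node; adjust to taste):

# what the host has and what is currently free
free -h
# total MB assigned to all defined VMs
grep -h '^memory:' /etc/pve/qemu-server/*.conf | awk '{sum+=$2} END {print sum " MB assigned"}'

On a 64 GB host, 5 VMs at 8 GB each is 40 GB, which still leaves generous headroom for the PVE host itself.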
 
This sort of helps.

The machine has 64 GB RAM, and I'm not sure 4 or 5 VMs with 4-8 GB each will reach that, but I also have to remember that Proxmox itself uses some RAM.

So if I can have, say, 10 VMs with 4 cores each (as I read your message), I should be good?

I think the sockets vs. cores vs. vCPUs part is where I am confused.
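On the sockets/cores point: the guest simply sees sockets x cores as its processor count, so the two configurations below both give 2 vCPUs (hypothetical VM ID):

# both give the guest 2 vCPUs; on a single-socket host like yours the first is the usual choice
qm set 105 --sockets 1 --cores 2
qm set 105 --sockets 2 --cores 1

Multiple virtual sockets mainly matter for NUMA on multi-socket hosts; here, 1 socket with the desired number of cores is the simple choice.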
 
I wouldn't allocate each VM 4 vCPUs just because that's possible. The more you overprovision your CPU, the longer the process queue will get, potentially slowing everything down. If a VM will run fine with 1 vCPU, I would only assign 1 vCPU. You can always add more vCPUs later if you see that the VM's average CPU utilization is regularly above 80%.
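Adding a core later really is a one-liner (hypothetical VM ID; the change applies at the next VM start unless CPU hotplug is enabled):

# bump the VM from 1 to 2 cores
qm set 101 --cores 2
# current resource usage of the VM as seen by the API (assuming the node name matches the hostname)
pvesh get /nodes/$(hostname)/qemu/101/status/current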
 
