CPU Sockets vs. Cores/Socket

hamed
Sep 6, 2010
Hi,
Could anyone please describe the difference between CPU Sockets and Cores per socket in VM resources?

Thanks
 
This feature mainly exists to meet licensing requirements. For example, if your host is powered by a single Intel quad-core CPU (= 1 socket with 4 cores), you can assign up to 4 CPUs to a KVM guest. But if you assign 4 sockets and run a Windows OS that is limited to 2 sockets - e.g. Windows XP - you cannot use all 4 CPUs; you will only see 2 due to the license limitation.

Therefore you assign 1 socket and 4 cores - now Windows XP shows 4 CPUs.
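For illustration, here is how the two layouts would look in the VM config (a minimal sketch; the path is the standard Proxmox location and the VMID and values are just this example's):

    # /etc/pve/qemu-server/VMID.conf
    # Layout A: 1 socket x 4 cores - a socket-limited guest like XP sees all 4
    sockets: 1
    cores: 4

    # Layout B (alternative): 4 sockets x 1 core - same 4 vCPUs, XP uses only 2
    sockets: 4
    cores: 1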
 
Thanks for the reply.
So there isn't any performance difference between these two?
For example, if I create a VM with 1 socket and 4 cores, is the performance the same as with 4 sockets and 1 core?
 
Yes, no difference.
 

I'm finding that it depends on the workload.
When routing packets in a VM, having ingress and egress share an L2 cache matters.
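You can check which logical CPUs share a cache from sysfs (a quick sketch; the paths are standard Linux, but the index numbers vary by CPU, so confirm with the 'level' file):

    # For each logical CPU, show which CPUs share its L2 cache.
    # index2 is usually L2; 'cat .../index2/level' should print 2.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        echo "$cpu shares L2 with: $(cat $cpu/cache/index2/shared_cpu_list)"
    done

'lscpu -e' prints a similar per-CPU cache summary.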

More detail is available on the KVM site, particularly with regard to passing the instruction set of the physical host's processor through to the guest OS, as opposed to using the default QEMU vCPU model.

This introduces the possibility of serious failures when migrating to a machine with even a slightly different CPU, because there is no QEMU vCPU software layer left to buffer and properly interpret the instructions issued by the guest.
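Before migrating such a guest, it's worth diffing the CPU feature flags of the two hosts (a sketch; 'hostB' is a placeholder for the migration target):

    # Any flag present locally but missing on the target can break a
    # guest that was started with the host CPU model passed through.
    grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > local_flags
    ssh hostB "grep -m1 '^flags' /proc/cpuinfo" | tr ' ' '\n' | sort > target_flags
    diff local_flags target_flags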

Today I tried allocating 6 cores on 1 vCPU socket, so as to leave at least 2 cores for the hardware interface's multiqueue, and became concerned about the virtual cache size.
Adding a second core to give it more cache isn't as easy as it seems.
Consider sorting out which interface is on which cache, then manually assigning CPU affinity to each interface so that inter-cache transfers are minimized... on 28 interfaces.
I'll have to come back to it tomorrow.
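For reference, the manual pinning involved looks roughly like this per interface (a sketch; the interface name, IRQ number, and CPU mask are hypothetical):

    # Find the IRQs belonging to a given interface, e.g. eth0.
    grep eth0 /proc/interrupts

    # Pin IRQ 42 to CPUs 0-3 (mask 0x0f), which share one L2 cache here.
    echo 0f > /proc/irq/42/smp_affinity

Now repeat that for every queue of every interface, times 28.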

What I think I'll try next is adding the "args: -cpu host" line to VMID.conf, and start looking into exactly what each of these instructions is doing.
Then I'll see if it's feasible, and even possible, to assign CPU affinity to an enclosure-bound KVM guest and have the router VM move all its packets through the L2 cache of one physical socket in Proxmox.
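The config change itself is one line (sketch; 100 stands in for the real VMID):

    # /etc/pve/qemu-server/100.conf
    # Pass the host CPU model straight through instead of the default QEMU vCPU.
    args: -cpu host

The args: line hands options directly to the underlying KVM/QEMU command, which is why the migration caveat above applies.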


TL;DR: Most of the time, it doesn't make a difference.
 
