
Thread: CPU Sockets VS. Cores/Socket

  1. #1
    Join Date
    Sep 2010
    Posts
    35

    CPU Sockets VS. Cores/Socket

    Hi,
    Could anyone please describe the difference between CPU Sockets and Cores per socket in VM resources?

    Thanks

  2. #2
    Join Date
    Aug 2006
    Posts
    9,919

    Re: CPU Sockets VS. Cores/Socket

    This feature is mainly implemented to meet licensing requirements. E.g. if your host is powered by a single Intel quad-core CPU (= 1 socket with 4 cores), you can assign up to 4 CPUs to a KVM guest. But if you assign 4 sockets and run a Windows OS that is limited to 2 sockets - e.g. WinXP - you cannot use all 4 CPUs; you will see only 2 due to the license limitation.

    Therefore you assign 1 socket and 4 cores - now WinXP shows 4 CPUs.
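
    To make the two layouts concrete, here is a minimal sketch of the same 4 vCPUs expressed both ways. The VMID.conf keys and the QEMU -smp equivalents shown are an illustration only, and the exact config file location depends on your Proxmox version:
    Code:
    # 1 socket x 4 cores - a 2-socket-limited guest (e.g. WinXP) can use all 4
    sockets: 1
    cores: 4
    # roughly the QEMU topology this produces:
    #   -smp 4,sockets=1,cores=4,threads=1

    # 4 sockets x 1 core - same 4 vCPUs, but WinXP will only use 2 of them
    sockets: 4
    cores: 1
    #   -smp 4,sockets=4,cores=1,threads=1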
    Best regards,
    Tom

    Do you already have a Commercial Support Subscription? - If not, Buy now

  3. #3
    Join Date
    Sep 2010
    Posts
    35

    Re: CPU Sockets VS. Cores/Socket

    Thanks for the reply.
    So there isn't any performance difference between the two?
    For example, if I create a VM with 1 socket and 4 cores, is the performance the same as with 4 sockets and 1 core?

  4. #4
    Join Date
    Aug 2006
    Posts
    9,919

    Re: CPU Sockets VS. Cores/Socket

    yes, no difference.
    Best regards,
    Tom

    Do you already have a Commercial Support Subscription? - If not, Buy now

  5. #5
    Join Date
    Jan 2010
    Posts
    288

    Re: CPU Sockets VS. Cores/Socket

    Quote Originally Posted by hamed
    Thanks for the reply.
    So there isn't any performance difference between the two?
    For example, if I create a VM with 1 socket and 4 cores, is the performance the same as with 4 sockets and 1 core?
    Quote Originally Posted by tom
    yes, no difference.
    I'm finding that it depends on the workload.
    When routing packets in a VM, having the ingress & egress interfaces share an L2 cache matters.
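
    If you want to check that on the host, the cache topology is exposed in sysfs. This is just a generic Linux check (assuming a kernel new enough to provide shared_cpu_list), nothing Proxmox-specific:
    Code:
    # which logical CPUs share each cache with CPU 0 (run on the host)
    for idx in /sys/devices/system/cpu/cpu0/cache/index*; do
        echo "L$(cat $idx/level) $(cat $idx/type) shared with CPUs: $(cat $idx/shared_cpu_list)"
    done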

    More detail is available on the KVM site, particularly with regard to passing the physical host processor's instruction set through to the guest OS, as opposed to making use of the default QEMU vCPU model.

    This introduces the possibility of serious errors when migrating to a machine with even a slightly different CPU, since there is no QEMU vCPU software layer left to buffer & properly interpret the instructions issued by the guest.
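
    One way to see what that means in practice is to compare the CPU flags the guest reports under the default vCPU model and under host passthrough; the extension names below are only examples of what might appear:
    Code:
    # inside the guest:
    grep -m1 flags /proc/cpuinfo
    # default QEMU vCPU -> a short, generic flag list
    # with "args: -cpu host" -> the host's own extensions show up (e.g. sse4_2, aes),
    # and those are exactly what a slightly different migration target may lack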

    Today I tried allocating 6 cores on 1 vCPU socket, so as to leave at least 2 cores for the hardware interface's multiqueue, and became concerned about the virtual cache size.
    Adding a second core to give it more cache isn't as easy as it seems.
    Consider sorting out which interface is on which cache, then manually assigning CPU affinity to each interface so that inter-cache transfers are minimized... on 28 interfaces.
    I'll have to come back to it tomorrow.
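
    For anyone trying the same thing, the per-interface affinity part boils down to steering each NIC's IRQs onto a chosen set of host cores. The interface name, IRQ numbers and core mask below are made up for illustration:
    Code:
    # on the host: find the IRQs for one interface (name is an example)
    grep eth4 /proc/interrupts
    # pin those IRQs to cores 6-7 (hex mask c0), leaving cores 0-5 for the VM
    echo c0 > /proc/irq/53/smp_affinity
    echo c0 > /proc/irq/54/smp_affinity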

    What I think I'll try next is adding the "args: -cpu host" line to VMID.conf and looking into exactly what each of these instructions is doing.
    Then I'll see whether it's feasible, or even possible, to assign CPU affinity to an enclosure-bound KVM guest and have the router VM push all its packets through the L2 cache of one physical socket in Proxmox.
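
    Roughly what that config change and the pinning experiment would look like; the VMID, core list, pid-file path and config location are assumptions for the sketch, not a tested recipe:
    Code:
    # VMID.conf for the router VM (VMID 101 is just an example)
    sockets: 1
    cores: 6
    args: -cpu host

    # after starting the VM, pin the whole KVM process to the cores of one
    # physical socket (here 0-5) so its work stays in that socket's cache
    taskset -cp 0-5 $(cat /var/run/qemu-server/101.pid)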


    TL;DR- Most of the time, it doesn't make a difference.
    Last edited by JustaGuy; 11-20-2010 at 05:36 PM.
