
CPU Sockets VS. Cores/Socket

Discussion in 'Proxmox VE 1.x: Installation and configuration' started by hamed, Sep 14, 2010.

  1. hamed

    hamed Member

    Joined:
    Sep 6, 2010
    Messages:
    35
    Likes Received:
    0
    Hi,
    Could anyone please describe the difference between CPU Sockets and Cores per socket in VM resources?

    Thanks
     
  2. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,123
    Likes Received:
    58
    this feature is mainly there to meet licensing requirements. E.g. if your host is powered by a single Intel quad-core CPU (= 1 socket with 4 cores), you can assign up to 4 CPUs to a KVM guest. But if you assign 4 sockets and run a Windows OS that is limited to 2 sockets - e.g. WinXP - you cannot use all 4 CPUs; you will only see 2 due to the license limitation.

    therefore you assign 1 socket and 4 cores - now WinXP shows 4 CPUs.
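
    For reference, the relevant lines of the VM config file look roughly like this - a minimal sketch, where the VMID 101, the guest name and the memory value are made-up examples (on PVE 1.x the file sits under /etc/qemu-server/):

        # /etc/qemu-server/101.conf (example VMID)
        name: winxp-test
        sockets: 1
        cores: 4
        memory: 1024

    With "sockets: 4" and "cores: 1" the guest would still see 4 vCPUs, but WinXP's 2-socket limit would kick in again.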
     
  3. hamed

    hamed Member

    Joined:
    Sep 6, 2010
    Messages:
    35
    Likes Received:
    0
    Thanks for the reply.
    So, there isn't any performance difference between these two?
    For example, if I create a VM with 1 socket and 4 cores, is the performance the same as with 4 sockets and 1 core?
     
  4. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,123
    Likes Received:
    58
    yes, no difference.
     
  5. JustaGuy

    JustaGuy Member

    Joined:
    Jan 1, 2010
    Messages:
    323
    Likes Received:
    1
    I'm finding that it depends on the workload.
    When routing packets in a VM, having ingress & egress share an L2 cache matters.

    More detail is available on the KVM site here, particularly with regard to passing the instruction set of the physical host's processor through to the guest OS, as opposed to making use of the default QEMU vCPU model.

    This introduces the possibility of grave errors when migrating to a machine with even a slightly different CPU, because there is no generic QEMU vCPU layer in between to hide host-specific instructions from the guest.

    Today I tried allocating 6 cores on 1 vCPU socket, so as to leave at least 2 cores for the hardware interface's multiqueue, and became concerned about the virtual cache size.
    Adding a second socket to give it more cache isn't as easy as it seems.
    Consider sorting which interface is on which cache, then manually assigning CPU affinity to each interface in such a manner that inter-cache transfers are minimized... on 28 interfaces.
    I have to come back to it tomorrow.
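
    In case it helps anyone following along, this is a rough sketch of the manual pinning I mean, assuming a multiqueue NIC that exposes one IRQ per queue (the interface name, IRQ numbers and CPU masks below are made up for illustration):

        # find the IRQs belonging to the NIC's queues
        grep eth0 /proc/interrupts
        # pin IRQ 45 to CPU 1 (mask 0x2) and IRQ 46 to CPU 2 (mask 0x4),
        # keeping both queues on cores that share the same L2 cache
        echo 2 > /proc/irq/45/smp_affinity
        echo 4 > /proc/irq/46/smp_affinity

    Now multiply that by 28 interfaces and the bookkeeping gets ugly fast.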

    What I think I'll try next is adding the "args: -cpu host" line to VMID.conf, and start looking into exactly what each of these instructions is doing.
    Then I'll see whether it's even feasible to assign CPU affinity to an enclosure-bound KVM and have the router VM push all its packets through the L2 cache of one physical socket in Proxmox.
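
    Roughly what I have in mind, as a sketch only - the VMID 101 and the core list 0-3 are placeholders (0-3 standing in for "the cores of one physical socket"), and I'm assuming the pidfile lives where Proxmox normally writes it:

        # in /etc/qemu-server/101.conf: pass the host CPU through
        # instead of the default QEMU vCPU model
        args: -cpu host

        # then pin the running kvm process of that VM to the cores
        # of one physical socket, so its traffic stays in one L2 cache
        taskset -cp 0-3 $(cat /var/run/qemu-server/101.pid)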


    TL;DR: Most of the time, it doesn't make a difference.
     
    #5 JustaGuy, Nov 20, 2010
    Last edited: Nov 20, 2010
