[SOLVED] Sockets vs Cores vs Threads vs vCPU vs CPU Units

ahriman

Member
Apr 26, 2022
I've read everything I can about this and I still don't understand how this works in Proxmox. I have a 2 x E5-2698 v3 Server which has 2 sockets, 16 cores per socket, 2 threads per core. So that's a total of 64 threads, 32 physical cores, 2 physical sockets.

Proxmox shows it as: 64 x Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz (2 Sockets). [screenshot: proxmox cpu.png]

When I configure a VM, what would be the proper way to allocate 50% of my CPU power to a single VM? I want my VM to be able to use up to 100% of half of all my cores, and leave the rest of the cores free for other VMs and the host.

Would it be 2 sockets, 8 cores or 2 sockets, 16 cores? It isn't clear to me whether Proxmox is allocating threads or cores to each VM.
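
For reference, the socket/core/thread breakdown can be checked on the host with something like this (plain lscpu, filtered down to the topology lines):

Bash:
# Summarize the host CPU topology: total logical CPUs, threads per core,
# cores per socket, and sockets
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))'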
 
Ignore threads. Use cores for the thought process.
Your machine then has 2x16 cores.
So the max for your desired vCPU count is 16.
Whether it is 1x16 or 2x8 also depends on your memory configuration (buzzword: NUMA).
HTH
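
To see how memory is split across the two sockets (the NUMA layout mentioned above), something like this on the host is enough; numactl is a separate package, so treat the first command as a sketch:

Bash:
# NUMA nodes, which logical CPUs belong to each node, and how much RAM each node holds
numactl --hardware
# or, without installing anything extra:
lscpu | grep NUMA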
 
Ignore threads. Use cores for the thought process.
Your machine then has 2x16 cores.
So the max for your desired vCPU count is 16.
Whether it is 1x16 or 2x8 also depends on your memory configuration (buzzword: NUMA).
HTH

So I went with that configuration: 2 sockets x 8 cores for 16 total cores. But it seems the VM doesn't have access to the full 2 threads per core. Here's the lscpu output of the VM, and here's the line that concerns me: Thread(s) per core: 1. If QEMU were passing through the entire core, shouldn't it be getting access to 2 threads instead of just 1 for each core?

Bash:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 48 bits virtual
CPU(s):                          16
On-line CPU(s) list:             0-15
Thread(s) per core:              1
Core(s) per socket:              8
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           63
Model name:                      Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz
Stepping:                        2
CPU MHz:                         2299.996
BogoMIPS:                        4599.99
Virtualization:                  VT-x
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       512 KiB
L1i cache:                       512 KiB
L2 cache:                        64 MiB
L3 cache:                        32 MiB
NUMA node0 CPU(s):               0-7
NUMA node1 CPU(s):               8-15
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds:               Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown:          Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat umip md_clear arch_capabilities
 
But it seems the VM doesn't have access to the full 2 threads per core
There is no concept of threads within guests...
This is only a concept of the hardware/host.


shouldn't it be getting access to 2 threads instead of just 1 for each core?
No, this is a misconception.
You are not seeing the physical core; it is virtualized. AFAIK even with CPU passthrough you don't see the host's logical cores (i.e. threads).
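
One quick way to convince yourself is to compare host and guest directly: on the host, SMT is active and every core lists two sibling threads, while inside the guest each virtual core reports a single thread. A rough check, assuming a reasonably recent kernel and the standard sysfs paths:

Bash:
# On the Proxmox host: is SMT/hyperthreading enabled, and which logical CPUs share core 0?
cat /sys/devices/system/cpu/smt/active
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

# Inside the guest: the topology only ever shows one thread per core
lscpu | grep 'Thread(s) per core'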
 
I did a little experiment to test this hypothesis. I ran a stress test on a guest with a 2 sockets x 8 cores configuration and monitored htop on both the guest and the host at the same time.

It seems the guest is only getting half the power of each core since only one thread is being passed through. Here's what htop looks like on the guest... all 16 cores are at full power:
[screenshot: guest.png]

Now check the host: if you count them, only 16 threads (4, 14, 22, 30, 31, 32, 33, 34, 35, 39, 41, 45, 52, 67, 58, 60) out of the 64 host threads are at 100%. The rest are idle and did not help the guest with the stress test. [screenshot: host.png]

So, it seems to me that Proxmox is passing through threads and not cores to the guest. If it was passing through a full core, then shouldn't I see 32 host threads at 100%?

It looks like, on a multithreaded CPU (if there is no way to pass two threads per core through to the guest), you should allocate double the cores if you want all of those host threads utilized.
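
For anyone repeating the experiment: the guest-side load can be generated with any ordinary CPU burner; stress-ng here is just an assumed stand-in for whatever tool you prefer:

Bash:
# Inside the guest: fully load all 16 vCPUs for 60 seconds, then watch htop on guest and host
stress-ng --cpu 16 --timeout 60s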
 
1 vm core = 1 qemu thread.
This is what I have observed as well.

I ran the test again, this time increasing the VM to 2 sockets x 16 cores (which is the full capacity of my server). The result was only a 50% load on the host, while the guest had a 100% load. So it seems on a multithreaded CPU it is safe to allocate 100% of the cores to a single VM and that will result in only a 50% CPU usage on the host.
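
On the host side, per-CPU utilization is easier to read from mpstat than by counting htop bars (mpstat is part of the sysstat package, so assume it is installed):

Bash:
# On the Proxmox host: utilization of every logical CPU, refreshed once per second
mpstat -P ALL 1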
 
note that in the qemu cpu topology, we could also add virtual threads support (it's supported in qemu, but not implemented in proxmox).

but it doesn't change anything:

whether you have in qemu 2 sockets x 2 cores x 2 threads, or 2 sockets x 4 cores, or 1 socket x 8 cores, it's always 8 threads in qemu.
 
note that in the qemu cpu topology, we could also add virtual threads support (it's supported in qemu, but not implemented in proxmox).

but it doesn't change anything:

whether you have in qemu 2 sockets x 2 cores x 2 threads, or 2 sockets x 4 cores, or 1 socket x 8 cores, it's always 8 threads in qemu.
the only difference might be with NUMA setups (where it can matter on which physical core/socket a specific qemu thread/vcpu is running)
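
For illustration, the -smp option behind all of this looks roughly like the following; qm showcmd prints the command line Proxmox actually generates, and VMID 100 is just an example:

Bash:
# Show the QEMU command line Proxmox generates for a VM, one option per line
qm showcmd 100 --pretty | grep smp
# In plain qemu, the same 8-vCPU guest could be described by any of these topologies:
#   -smp 8,sockets=2,cores=2,threads=2
#   -smp 8,sockets=2,cores=4,threads=1
#   -smp 8,sockets=1,cores=8,threads=1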
 
So it seems on a multithreaded CPU it is safe to allocate 100% of the cores to a single VM and that will result in only a 50% CPU usage on the host.
Disagree.
This is what the system stats tell you. But one thread does not equal a real core, so technically your math is wrong. Check the web to understand the difference between threads (SMT) and real cores.
This is a standard misunderstanding IMHO...
 
no - it allocates one vcpu core, which is handled by one thread ;)
 
no - it allocates one vcpu core, which is handled by one thread ;)
Ok, on the guest machine this may be called a vCPU core, but on the physical hardware what is happening is that you are giving the guest access to a single thread. So I still think it would be clearer if the interface specified threads instead of cores.
 
no, you are configuring vCPU sockets and cores as part of the virtual guest hardware - that each such core then gets executed by Qemu as a separate, single thread is an implementation detail. the options offered by Qemu are actually more complex (we only expose sockets and cores, with dies (per socket) and threads (per core) always being 1). renaming cores to threads would be rather confusing - as sockets = 2, cores = 2 would then mean 4 threads, not 2..
 
no, you are configuring vCPU sockets and cores as part of the virtual guest hardware - that each such core then gets executed by Qemu as a separate, single thread is an implementation detail. the options offered by Qemu are actually more complex (we only expose sockets and cores, with dies (per socket) and threads (per core) always being 1). renaming cores to threads would be rather confusing - as sockets = 2, cores = 2 would then mean 4 threads, not 2..
Allow me to disagree. I think it is confusing right now.

I think a user (when deciding how many cores to allocate) is thinking about the physical cores in the server on a hardware level. As a user with a machine with 2 sockets and 16 cores each, when I allocate 2 sockets x 16 cores to the VM, I'm only giving that VM 50% of my hardware processing power since the VM only gets access to 32 threads instead of the full 64.
 
no you are not ;) 2 hyper threads are (by far!) not twice as fast as a single thread fully utilizing a core (the usual speed gain is somewhere around 30%), and a VM has other threads running as well anyway (e.g., for I/O, for the main loop, ..). and again - that panel is not about passing through host resources, it's about configuring the virtual hardware.

I can see the argument for adding the 'threads' option as an advanced configuration knob (defaulting to 1, but allowing one to expose "virtual hyperthreads" to the guest), but renaming the existing cores to threads doesn't make any sense at all..
 
Ok, but what spirit said still holds true...

1 vm core = 1 qemu thread.

On machines with hyperthreading it would make sense to pass through ALL THREADS of a core and not just a single thread, if the goal of that config screen is to allow users to configure cores. It seems to me the developers made that screen without thinking about hyperthreading. And yes, I understand that I/O and other tasks also use threads on the host, but those threads are not available to the guest for compute work.

So I still vote for modifying that config screen. If the idea is to pass through cores, you need to pass through both threads of the core, not just 1 thread.

Otherwise it should just say vCPUs and those can be defined however Proxmox wants to define them. But since you're saying you are passing through cores, it is confusing because the VM doesn't get full access to 100% of the processing power of the core since both threads are not getting passed through.
 
again - you are the one who thinks this setting passes through a (physical) host core, that's not what that setting says or does nor what the documentation says. it allows you to configure the number of virtual cores the guest sees as part of its virtual hardware (also known as vcpus). otherwise things like configuring multiple virtual sockets on a physically single socket system would be impossible ;)

I'll try to summarize the multiple angles involved:
- naming of the option "cores" - it does what it says on the tin, configure the virtual CPU cores, Qemu/libvirt also call this option "cores"
- non-exposure of the option "threads" - this is rarely useful, it just allows the virtual CPU topology to pretend there is HT going on
- limit of vCPUs (cores*sockets) per guest to physical logical core count (host sockets * cores * HTs) - exceeding this limit will just reduce performance by causing more context switches, so it will not be bumped
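
Coming back to the original goal of giving one VM roughly half the box, that summary boils down to something like the following sketch (VMID 100 assumed; cpulimit is optional and simply caps the total host CPU time the guest may consume):

Bash:
# 16 vCPUs as 2 virtual sockets x 8 cores, with NUMA enabled to mirror the 2-node host
qm set 100 --sockets 2 --cores 8 --numa 1
# optionally cap the VM at 16 logical CPUs' worth of host CPU time
qm set 100 --cpulimit 16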
 
I think you confuse CPU (hyper)threads and OS threads.

If the idea is to pass through cores
You don't pass through any CPU core or CPU thread. A virtual CPU core runs as a separate OS thread on the host. Each OS thread can then be scheduled on a CPU core and/or CPU thread (depending on the CPU's capabilities).

The core vs. socket semantics are there for NUMA systems, to allow more efficient usage of multi-CPU-socket systems, and/or may be required by some (often licensing) software inside the VM.

Calling cores vCPUs would be confusing, as that is not the total vCPU count; sockets times cores is, so the existing name fits.
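
This is easy to see from the host side: the VM is a single QEMU process, and every vCPU is just one of its OS threads, placed by the kernel scheduler on whatever logical CPU is free. For example, assuming VMID 100 and the usual Proxmox PID file location:

Bash:
# List the VM's QEMU threads and the host logical CPU (PSR) each one last ran on;
# the threads named 'CPU x/KVM' are the guest's virtual cores
ps -T -o pid,tid,psr,comm -p "$(cat /var/run/qemu-server/100.pid)"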
 
I think you confuse CPU (hyper)threads and OS threads.


You don't pass through any CPU core or CPU thread. A virtual CPU core runs as a separate OS thread on the host. Each OS thread can then be scheduled on a CPU core and/or CPU thread (depending on the CPU's capabilities).

The core vs. socket semantics are there for NUMA systems, to allow more efficient usage of multi-CPU-socket systems, and/or may be required by some (often licensing) software inside the VM.

Calling cores vCPUs would be confusing, as that is not the total vCPU count; sockets times cores is, so the existing name fits.

Maybe it's just me, but when I allocate resources to a VM, I do it in the context of how much I have available on the host. I don't want to over-allocate or under-allocate. The default screen isn't very informative; it leads me to believe I am allocating real host sockets and real host cores, not virtual ones. As a user, my expectation (even though I was wrong) was that allocating a core would pass through both threads of that core.

As a product manager of a SaaS product, I can tell you even though users are oftentimes wrong in their assumptions (like I was here), it makes sense to see why they are assuming things incorrectly. Sometimes very small changes can help users understand things better. I admit I am not a technical user nor a system admin, so I'm using Proxmox precisely because I wouldn't be able to use something like qemu on my own.


[screenshot: Screen Shot 2022-06-21 at 10.17.01 AM.png]
 
