Enable or Disable Hyperthreading?

e100

I am planning on assembling a system with dual CPUs:
E5-2650 Sandy Bridge, 8 cores with Hyper-Threading

Some people recommend that hyperthreading should be disabled because it can degrade performance.
Other people recommend that it should be enabled because in most instances it improves performance.
I have even run across people suggesting that you benchmark your application to find out whether HT helps, but since I will be running various VMs where the workload is likely to change frequently, that suggestion is impractical.

I found an article where someone actually ran benchmarks in KVM to find out, but it is a little old:
http://www.phoronix.com/scan.php?page=article&item=linux_kvm_scaling&num=1

Do you have real-world experience that you can share?
Did you get better performance with or without HT?
 
You'll want it on (especially on an E5). The extra threads are not full vCPUs when Hyper-Threading is on; it splits the pipeline, allowing it to process two threads per core. It's way better than when it first came out.

/b
 
For me, I always disable hyperthreading. (I don't want to have 2 vCPUs on a single physical core.)
I don't know about the latest Intel generation; maybe it works better now.

I know that the first gen of hyperthreading often caused issues, but it seems that newer CPUs have a much better implementation.

When you mention not wanting two virtual CPUs on one core, do you mean that you do not want two KVM virtual CPUs scheduled on the same physical core, one of them landing on the hyperthreading sibling?
If so, CPU pinning seems like a good solution to that problem.
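To make that concrete, here is a minimal sketch of host-side pinning, assuming a hypothetical QEMU/KVM PID and an arbitrary pair of host CPUs (the same thing can also be done from the shell, e.g. with taskset):

```python
#!/usr/bin/env python3
"""Minimal sketch: pin an already-running QEMU/KVM process to chosen host
CPUs so its vCPU threads cannot wander onto a hyperthread sibling.
The PID and CPU numbers below are placeholders, not real values."""
import os
from pathlib import Path

KVM_PID = 12345          # hypothetical PID of the qemu/kvm process
ALLOWED_CPUS = {0, 2}    # hypothetical host logical CPUs to allow

# sched_setaffinity() acts per thread, and QEMU runs each vCPU as its own
# thread, so walk every thread of the process and pin them all.
# (Run as root or as the owner of the QEMU process.)
for tid_dir in Path(f"/proc/{KVM_PID}/task").iterdir():
    os.sched_setaffinity(int(tid_dir.name), ALLOWED_CPUS)

# Read the affinity back to confirm the change took effect.
print("allowed CPUs:", sorted(os.sched_getaffinity(KVM_PID)))
```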
 
Yes, exactly.
Is the performance equal if the guest's 2 vCPUs are scheduled on 2 real cores versus on 1 core with hyperthreading?
If not, this can give us random performance differences in the guest.
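One way to find out would be a quick host-side experiment: run two CPU-bound workers pinned either to two separate physical cores or to the two hyperthread siblings of one core, and compare the wall-clock times. A rough sketch, where the CPU numbers are placeholders (the real sibling pairs are listed in /sys/devices/system/cpu/cpu*/topology/thread_siblings_list):

```python
#!/usr/bin/env python3
"""Rough sketch of the HT-siblings-vs-real-cores comparison.  The CPU
numbers below are placeholders; check the sysfs topology for real pairs."""
import os
import time
from multiprocessing import Process

TWO_PHYSICAL_CORES = [0, 2]    # hypothetical: CPUs on two different cores
HT_SIBLINGS_ONE_CORE = [0, 8]  # hypothetical: the two siblings of one core

def burn(cpu, iterations=20_000_000):
    """Pin this worker to one logical CPU and do pure integer work."""
    os.sched_setaffinity(0, {cpu})
    total = 0
    for i in range(iterations):
        total += i * i

def run_pair(cpus):
    """Run one worker per CPU in the list and return the elapsed time."""
    workers = [Process(target=burn, args=(cpu,)) for cpu in cpus]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print("two physical cores: %.2f s" % run_pair(TWO_PHYSICAL_CORES))
    print("HT siblings       : %.2f s" % run_pair(HT_SIBLINGS_ONE_CORE))
```

If the sibling run is noticeably slower, that difference is exactly what an unpinned guest would see at random.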
 

That is one concern I have. I might have a particular KVM VM where I need all the performance I can get, and I do not want things scheduled on the slightly lower-performing hyperthreading cores.
For that I am thinking that using cpu affinity (pinning) on the KVM process would be a good solution.

This could possibly be very useful on multiple socket machines too.
You would not want the kernel scheduling execution on cpu #1 core #5 one moment, then cpu #2 core #1 the next moment.
Having the kernel always schedule a particular virtual CPU on a particular CPU socket and core *should* perform better, since the same set of CPU caches is used for that virtual CPU all the time rather than it being scheduled where the cache is likely stale.
If you have two or more virtual cpus assigned to a single KVM process, pinning them to a set of cores that share the same caches should perform better than bouncing around to cores where the cache will be stale for that VM.

Maybe it is best to just let the kernel schedule however it thinks best rather than pinning; some benchmarks would help answer this.
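For what it's worth, the topology needed for that kind of pinning is exposed in sysfs, so choosing "one logical CPU per physical core, all on one socket" can be scripted. A rough sketch (the socket number, and feeding the result to sched_setaffinity afterwards, are assumptions for illustration):

```python
#!/usr/bin/env python3
"""Sketch: pick one logical CPU per physical core on a single socket,
using the Linux sysfs topology files, so a VM pinned to that set never
lands on a hyperthread sibling or on the other socket's cold caches."""
from pathlib import Path

def one_thread_per_core(socket_id: int):
    """Return a set with one logical CPU per physical core on the socket."""
    chosen, seen_cores = set(), set()
    for topo in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*/topology")):
        cpu = int(topo.parent.name[3:])                        # "cpu17" -> 17
        package = int((topo / "physical_package_id").read_text())
        core = int((topo / "core_id").read_text())
        if package == socket_id and core not in seen_cores:
            seen_cores.add(core)
            chosen.add(cpu)   # first sibling seen for this core wins
    return chosen

if __name__ == "__main__":
    # Hypothetical use: pin a performance-critical VM to socket 0 only.
    print("socket 0, one thread per core:", sorted(one_thread_per_core(0)))
```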
 
I'm running Proxmox with multiple KVM machines on E5-2690 with HyperThreading enabled. No issues for now.

Michu
 
Quoting our vSphere guru at my job: before Sandy Bridge, HT was a no-go, and if you could not avoid it, it was important to pin vCPUs to the same core because of context-switching overhead. With the new memory and bus architecture on Sandy Bridge these problems are gone. E.g. on VMware it was recommended to disable HT before Sandy Bridge, while on Sandy Bridge it is actually recommended to have HT activated due to its optimized NUMA architecture. http://www.qdpma.com/systemarchitecture/systemarchitecture_sandybridge.html
 

Thank you for this information; it is exactly the sort of thing people need to know.
 
