[SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

wahmed

After reading countless discussions, articles, and whitepapers without finding a suitable answer, I decided to take matters into my own hands and do some testing to decide whether I should be using Hyper-Threading or not. Along the way, I also found the answer to whether to use fixed or variable memory for VMs in Proxmox.
[[Disclaimer: These results are all based on my own testing and configuration, so my opinion may be biased towards my virtual environment.]]

Short Answer
-YES!! Use Hyper-Threading whenever possible, even on newer multi-CPU nodes.
-Use fixed memory allocation in VMs where performance is needed. Do NOT use variable memory, especially on Windows-based VMs (see the sketch below).
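
For anyone who wants to try this, here is a quick sketch of the two setups using the qm CLI. VMID 100 is a placeholder and the sizes are just the ones from my setup; values are in MB:

# Fixed allocation: 16GB, balloon device disabled
qm set 100 --memory 16384 --balloon 0

# Variable allocation (what I had before): balloon between 4GB min and 16GB max
qm set 100 --memory 16384 --balloon 4096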

Long Answer
Hardware used for testing
-------------------------
Node 1 = Dual Xeon E5-2620 v2, 64GB RAM, Hyper-Threading disabled, total cores: 12 (12 physical + 0 HT)
Node 2 = Dual Xeon E5-2620 v2, 64GB RAM, Hyper-Threading enabled, total cores: 24 (12 physical + 12 HT)

Initial VMs used for testing
--------------------------
2 x Windows Server 2008 R2 64-bit - 6 vCPUs, variable memory: 16GB max / 4GB min

In my environment both Windows servers are RDS servers, each with about 20 users. I used Performance Monitor in Windows to gather performance data; no other tool was used. Initially I put both servers on Node 2, but we had been having performance issues for some time. I did not want to believe my hardware was not capable of running just 2 RDS servers. Performance Monitor showed RAM was not even half used, but CPU consumption was above 80% during work hours, sometimes hitting 100% and staying there. The RDS servers only run standard office programs such as MS Office, QuickBooks, etc. No graphics-heavy work, not even YouTube. To make sure it was not a hardware problem, I moved one of the RDS servers to Node 1, which had HT enabled at that time. No noticeable difference, really. To find out whether it was HT causing the issue, I disabled HT on the first node, but the performance of that VM actually got worse. I re-enabled HT, rebooted the VM, and it got better.
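
For reference, the same data can also be collected from the command line inside the guest instead of the Performance Monitor GUI; a quick sketch using the standard counters:

rem Sample total CPU usage and available memory once per second
typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 1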

I tried many different settings, such as:
-increasing/decreasing vCPUs and memory
-fixing/increasing/decreasing the Windows pagefile
-putting the VMs on local disk rather than RBD

Lastly, I rebooted the VMs without any other users logged in or programs running. Performance Monitor still showed about 45% CPU consumption on both VMs; the VM on Node 1 with HT disabled was actually showing slightly higher consumption. In all cases memory consumption was way down.
Finally, I disabled variable memory and fixed it at 16GB. After rebooting the VM, Performance Monitor showed... 1% consumption!!! I applied the same setting to the VM on the HT-disabled node: same result. The CPU was quiet! After logging in all users over RDP and running programs to simulate maximum load in all their sessions, consumption went up to 49% on the HT-enabled node and 61% on the HT-disabled node. I was even able to drop the vCPU count to 2 and it still worked just fine, but since it is an RDS server and users might cause spikes sometimes, I settled on 4 vCPUs.

I configured fixed memory on ALL the VMs in the cluster, both Windows and Linux, and was able to reclaim 32 vCPUs!!!! The VMs that were running with high CPU usage, given the nature of their roles, are now churning away just fine with a lower vCPU count.
I should mention that in our clusters we keep a very close eye on vCPU allocation so we do not over-allocate or abuse HT-enabled nodes, so the vCPUs that were assigned previously really were needed to meet demand. With memory fixed on all Windows VMs, I ran the HT vs. no-HT test again. The same VM seemed happier on the HT-enabled node than on the disabled one. Again, all of this comes from Performance Monitor data.
Even with a lower vCPU count and fixed memory, our RDP users are much happier, as per the reports we have been getting all day. I have monitored the clusters for the last 2 days; the vCPUs are much quieter now with fixed memory. Just to make sure fixed memory was what made the difference, I changed one of the VMs back to the old setting with variable memory, and the same old high vCPU consumption came back. Changed it back to fixed, and consumption went down significantly.

So the question to all: is this normal behavior? Has anybody else had or noticed a similar trend? I found many articles debating HT vs. no HT. Some said disable HT on multi-CPU nodes; some said just leave it on. I am sure many of you have tried to find the same answer. My finding says leave it on. I am still monitoring the cluster, and so far things look much better with HT enabled than disabled.

Also, why would fixed memory consume less vCPU than variable? Although things are working excellently with these changes, I would like to know the "why", and to hear about others' experiences.
 
Re: Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

Your assumption is only valid for virtualized Windows. For Linux and xBSD there is no performance penalty for using variable memory.

Regarding HT: HT should only be used if the chipset is Sandy Bridge or newer, which is also the recommendation from VMware. The reason is that as of Sandy Bridge, Intel introduced a new NUMA architecture which allows the cache to be shared between all cores. Before Sandy Bridge, the cache was divided into chunks that were assigned to specific cores and could not be shared freely between them. So with HT active, you could end up with HT cores assigned to a specific VM that did not belong to the same physical core(s), in which case performance dropped markedly, since cached memory had to be exchanged between the HT cores over the data bus. Fetching memory over the data bus is an order of magnitude slower than accessing it directly from cache.
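
As a side note, on the host you can check which logical CPUs are HT siblings of the same physical core; this is exactly the pairing that matters for the cache issue above (standard Linux tooling, just a quick sketch):

# Logical CPUs with their core/socket mapping; HT siblings share the same CORE value
lscpu --extended=CPU,CORE,SOCKET

# Or per logical CPU, e.g. for cpu0:
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list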
 
Re: Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

Your assumption is only valid for virtualized Windows. For Linux and xBSD there is no performance penalty for using variable memory.
I did some further testing with Linux VMs. It appears that the VMs running ClearOS (Red Hat based) are much quieter now with fixed memory. FreeBSD-based VMs showed no difference. But in any case, Linux VMs are not resource hogs in my environment, so it makes less of a difference there. Windows, on the other hand, made a huge difference. :)

Regarding HT: HT should only be used if the chipset is Sandy Bridge or newer, which is also the recommendation from VMware.
Was E55XX the last generation without NUMA?

Does this fixed vs. variable memory finding also apply to Windows VMs on AMD-based CPUs? Of course the HT issue does not matter, since AMD doesn't have HT.
 
Re: Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

E55XX has NUMA; it is just the generation with the mentioned "bug".

Regarding Windows on AMD I cannot tell, since I don't run any Windows servers here. Maybe spirit can answer, since I know he is running a number of Opterons in production.
 
Re: Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

E55XX has NUMA; it is just the generation with the mentioned "bug".

Regarding Windows on AMD I cannot tell, since I don't run any Windows servers here. Maybe spirit can answer, since I know he is running a number of Opterons in production.

Hi, I don't use auto-ballooning for Linux or Windows. (But I am using manual ballooning with shares=0, then changing the min memory.)
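In config terms it looks roughly like this (illustrative values, in /etc/pve/qemu-server/<vmid>.conf):

# memory = max RAM in MB, balloon = min RAM in MB; shares=0 disables automatic ballooning by the host
memory: 16384
balloon: 4096
shares: 0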

Indeed, the Windows balloon driver can be CPU hungry (I think it has improved in the latest driver version).
https://github.com/YanVugenfirer/kv...mmit/afe4a18c6fed9d4b32a214db1fb87ef29f3bd8e0
 
Re: Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

Thanks spirit and mir.
This has been a real eye opener. I have been using VMs for a long time but never considered that ballooning might have been causing the high CPU usage. For now I have disabled balloon memory on all Windows VMs and on all critical Linux VMs. Our users continue to provide positive feedback, and we have noticed significant performance improvements on the Windows Remote Desktop servers.
 
Re: Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

Hi,
Based on what I see in this thread, I am thinking of adding a note to our Windows best practices article to warn users not to use the ballooning driver on Windows.
Any comments on that?
 
I think the ballooning issue still applies to Windows. It does impact all Windows VMs. We went back to ballooning for all Linux VMs. No issues at all.

 
I echo zoobido's words. Should we still avoid using the balloon driver on Windows guests?
 
I would be interested to know whether the performance impact of the VirtIO balloon driver still exists. Any newer findings?
One of the newer YouTube videos on your channel recommends installing the driver to get better control over the host memory used by the VM.
 
I'd like to know too whether ballooning is still causing high CPU usage.

Maybe updated drivers have fixed this issue?
 
Re: Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

Hi,
Based on what I see in this thread, I am thinking of adding a note to our Windows best practices article to warn users not to use the ballooning driver on Windows.
Any comments on that?


Added the recommendation note not to use the balloon driver on Windows guests to http://pve.proxmox.com/wiki/Performance_Tweaks

Now I'm confused: in "Windows 2012 guest best practices" it is encouraged to use the "Automatically allocate memory" option, while in "Performance Tweaks" it clearly advises you not to. Moreover, in "Dynamic Memory Management" one can see how to install ballooning in a Windows VM. I just installed Proxmox 5.0 and I'm about to create a Win2012R2 VM for the AD role. Please, I need to know which suggestion I should follow.
 
Memory is cheap, memory is plentiful: go for fixed.

BTW, for installing a domain controller I recommend 2 GB RAM. Once it is running you can use 1 GB without an issue, and if you're really short it will run on 500 MB (the installer requires a minimum of 756 MB, but 2 GB installs way faster :))

As for me, I never trusted ballooning in KVM, and I think it never really worked for me. But I can't say for 100% sure, because the bluescreens I had were cured by turning ballooning off AND installing new drivers... though if I had to guess, ballooning was responsible for most of the bluescreens.

From the practical side, I doubt it is tested enough to always exclude it from the list of possible causes of errors. So for debugging purposes alone I leave it off, because it is the one thing you won't think to check if the machine crashes in a few weeks or months.
 
Side note: just because someone calls it best practice doesn't mean it is.
Ballooning is mainly for overprovisioning your host.

This might be good for special purposes, but for regular working VMs (let's say you have 2 web servers, 2 database servers, one mail server, some Windows boxes...), none of those are ones you really want to overcommit.

Besides, most systems should have stable RAM usage anyway, so fixed it is.
 
I assume that since this practice is written on Proxmox's own wiki, it should be taken into consideration. I'm new to Proxmox, so I thought I would follow those suggestions, as they are supposed to be there to help newbies like me. Nevertheless, I'm open to other suggestions. Before stumbling upon this contradiction, I'd already set up my Win2012R2 KVM for AD/DNS/DHCP purposes with the following (a config-file sketch follows below):
  1. HDD - Size 50 GB - Bus/Device VirtIO - Cache Write back - None of the checkboxes enabled - Format Raw
  2. CPU - 1 Socket - 2 Cores - No NUMA
  3. RAM - Automatically allocate memory, range from 1024MB to 2048MB - Shares 1000
  4. Network VirtIO - Bridged - Multiqueue 4
I'm testing Proxmox 5.0 on an Acer H81H3-AM with 8GB RAM, CPU(s) 4 x Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz, and a 1TB Seagate SATA drive. Should everything go smoothly, I'm going to dismantle the ESXi 5.5 on my HP ProLiant DL360p Gen8 8 SFF and migrate everything there (I'm planning to set up more VMs). Any suggestions are welcome.
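
In config-file terms, my reading of those GUI settings is roughly the following (hypothetical VMID 101; the MAC address and storage name are made up):

# /etc/pve/qemu-server/101.conf (reconstructed from the GUI options above)
sockets: 1
cores: 2
memory: 2048
balloon: 1024
shares: 1000
virtio0: local:101/vm-101-disk-1.raw,cache=writeback,size=50G
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4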
 
