CPU Performance Degradation

Hi, could you describe how you enable TLB?
Transparent huge pages are related; it seems strange that it doesn't work.

I dug into this a bit more. It looks like I don't need hugetlbfs; the key I was missing when enabling huge pages was simply reserving some. Enabling huge pages alone doesn't seem to help, but reserving some is what was missing.

If I do just this:

echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo always > /sys/kernel/mm/redhat_transparent_hugepage/enabled

That won't help, but if I do this:

echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo always > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo 296 > /proc/sys/vm/nr_hugepages

It makes all the difference in the world.
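
In case it helps anyone verify this, the reservation can be checked in /proc/meminfo; HugePages_Total should match the value echoed above, and Hugepagesize is typically 2048 kB on x86_64 (just how I check it, not the only way):

grep Huge /proc/meminfo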
 
Hi,
do you also mount hugetlbfs as described here: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt ? - though I just saw it's probably not necessary:
Code:
no such mount command is required if the applications are going to use only shmat/shmget system calls or mmap with MAP_HUGETLB.
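
For reference, the mount from that doc would look like this (the mount point /mnt/huge is only an example):

mkdir -p /mnt/huge
mount -t hugetlbfs none /mnt/huge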
It's not clear to me how the VM takes the huge pages (and how many each VM gets).
Another question: if I enable and use huge pages and then live-migrate, does the migration fail if there are not enough free huge pages on the destination node?

Udo
 

I am still testing, but so far it seems I only have to enable and reserve the huge pages; I am not seeing a need to mount hugetlbfs. Live migration is working, but it is pretty much pointless while it is bottlenecked by ssh.
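
For anyone who wants to double-check that a running VM is actually using huge pages, this is roughly how I look at it (VM ID 100 is just an example; on Proxmox the pidfile should be under /var/run/qemu-server/):

pid=$(cat /var/run/qemu-server/100.pid)
# AnonHugePages in smaps counts transparent huge pages mapped by the process
grep AnonHugePages /proc/$pid/smaps | awk '{sum+=$2} END {print sum " kB"}'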
 

Hi adamb and everyone.

I will be very grateful if you can help me.

I have 256 GB of RAM in a server that will be used only for PVE and a Win2008R2 VM that will also run MS-SQL Server.

The RAM of this server will be distributed as follows:

- 8 GB RAM for PVE
- 8 GB RAM for the Win2008R2 VM
- 240 GB RAM for MS-SQL Server

Apologies if my questions are those of a rookie, but let me ask a few:
1) What values should I set to gain speed with huge pages?
2) How do I make the changes permanent across a restart (host and/or a guest that is in HA)?
3) One of your suggestions says "echo 296 > /proc/sys/vm/nr_hugepages", and I don't understand why you chose the number 296.
4) If possible, explain in detail; that will be better for me (I will try to understand you).

Best regards
Cesar

Re-edited:
I have now upgraded my PVEs, and they show these values (without my manual intervention and with the VM turned off):
shell> uname -a
Linux pve2 3.10.0-5-pve #1 SMP Wed Oct 15 08:03:00 CEST 2014 x86_64 GNU/Linux

shell> cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

shell> cat /proc/sys/vm/nr_hugepages
0

shell> cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
cat: /sys/kernel/mm/redhat_transparent_hugepage/enabled: No such file or directory

Other commands (while the VM is turned off):

shell> free -h
             total       used       free     shared    buffers     cached
Mem:          251G       2.9G       248G         0B        40M       282M
-/+ buffers/cache:       2.6G       249G
Swap:          19G         0B        19G

shell> numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38
node 0 size: 131026 MB
node 0 free: 127159 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39
node 1 size: 131072 MB
node 1 free: 127813 MB
node distances:
node 0 1
0: 10 20
1: 20 10
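
I also checked whether the kernel's automatic NUMA balancing is active (1 = enabled, 0 = disabled); I am not sure if this is the right knob for PVE, it is just what I found in the kernel documentation:

shell> cat /proc/sys/kernel/numa_balancing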
 
If I can help,

I can provide some test patches to test automatic NUMA balancing with qemu 2.1 and kernel 3.10.

Any help will be welcome

The problem is that in a few days the server will be in production.

Maybe this PDF document can help us with automatic NUMA balancing and automatic huge pages (supported in the kernel):
https://access.redhat.com/documenta...ation_Tuning_and_Optimization_Guide-en-US.pdf

Or does PVE have restrictions such that these features will not work (NUMA and huge pages in automatic mode)?
 

I have sent a patch to the dev mailing list.

(qemu needs to be patched too, because currently NUMA support is not enabled)
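
As an illustration only (not the actual patch; the sizes and cpu ranges are made-up examples), the qemu 2.1 options to expose two guest NUMA nodes look roughly like this:

-object memory-backend-ram,id=ram-node0,size=4G \
-numa node,nodeid=0,cpus=0-3,memdev=ram-node0 \
-object memory-backend-ram,id=ram-node1,size=4G \
-numa node,nodeid=1,cpus=4-7,memdev=ram-node1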
 
1) What values should I set to gain speed with huge pages?
2) How do I make the changes permanent across a restart (host and/or a guest that is in HA)?
3) One of your suggestions says "echo 296 > /proc/sys/vm/nr_hugepages", and I don't understand why you chose the number 296.
4) If possible, explain in detail; that will be better for me (I will try to understand you).


1. I found the settings I provided worked best for me.
2. I put them in /etc/rc.local (see the sketch below).
3. This value is simply what worked best for me in testing.
4. I'm not sure what to explain in detail - the issue at hand, or how it was solved?
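
For reference, my /etc/rc.local is just the same commands again (a minimal sketch; drop the redhat_ line on kernels where that path doesn't exist, such as 3.10):

#!/bin/sh
echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo always > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo 296 > /proc/sys/vm/nr_hugepages
exit 0

On the 296 number: huge pages are 2 MB each on x86_64, so 296 pages reserve about 592 MB; backing an entire 8 GB VM would take 8192 / 2 = 4096 pages. I arrived at 296 by testing, not by a formula.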
 

Hi adamb.

Many thanks for your reply. Now I know that with kernel 3.10, PVE has automatic huge pages enabled.

As a reference about this support in the RedHat kernel (among other things), see this link:
https://access.redhat.com/documenta...ation_Tuning_and_Optimization_Guide-en-US.pdf

Maybe you want to apply it on your server and tell us about its performance; I can't tell you myself because MS-SQL Server isn't installed yet.

Another interesting point is NUMA auto-balancing, which I know Spirit and Dietmar are currently discussing how to apply in PVE (see the PDF document).

Best regards
Cesar
 
1. Should be useful for any VM with big memory (the doc said around 15-20% more CPU performance).
2. On the host.

Hi Spirit

In this link, RedHat says that THP (transparent huge pages) is not recommended for database workloads:
https://access.redhat.com/documenta...formance_Tuning_Guide/s-memory-transhuge.html

Have you tested with some database (for example MS-SQL Server)?

Very soon I will do some tests with MS-SQL-Server with THP disabled. If you want to know the results of my tests, please just tell me.
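
My plan for the test is simply this (assuming the same sysfs paths as earlier in the thread; the RedHat guide mentions the defrag knob as well):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag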

Best regards
Cesar
 
