NUMA and Performance

freebee

Member
May 8, 2020
Hi.
A few days ago I migrated some virtual machines from one server to another.
One specific virtual machine warned me about the processor: usage was higher than usual.
At first I treated this as an environment change, a simple boost from having more cores.
Today I read an article from NGINX recommending that NUMA be avoided on the servers where it runs:
https://www.nginx.com/blog/optimizing-web-servers-for-high-throughput-and-low-latency/#313Hardware
Now I am wondering whether NUMA can degrade the performance of VMs in general.
I enable it only to hot-plug resources without rebooting. On this specific virtual machine, NUMA was disabled before the migration.
My setup is HP and Lenovo servers with NUMA active (2 processors each).
Best Regards.
 
If you have multiple physical processors, you want NUMA enabled.

Each processor has direct access to half of your server's memory,

but it has to use a shared bus (with limited bandwidth) to reach the memory attached to the other processor.

NUMA is just a logical grouping of memory/CPUs that tells the OS "this CPU has direct access to this memory", so the OS will try to schedule processes on the CPUs that have direct access to their memory.

Without NUMA you could hit the worst case, where a CPU always uses the remote memory.
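
As a side note, you can inspect the host's NUMA topology (which CPUs and how much memory belong to each node, and the distances between nodes) with the numactl tool. A minimal sketch, assuming the numactl package is installed on the Proxmox host:

Code:
# install the tool if it is not already there
apt install numactl

# list the NUMA nodes with their CPUs, memory sizes and inter-node distances
numactl --hardware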

You can use the "numastat" command on the Proxmox host to verify that everything is working fine:

Code:
# numastat 
                           node0           node1
numa_hit            767992839737    598973965731
numa_miss                      0               0
numa_foreign                   0               0
interleave_hit              3253            3196
local_node          767980317279    598970550923
other_node              12522458         3414807

If numa_miss is low, everything is fine.
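
If you want a percentage instead of raw counters, the same numbers are exposed per node under /sys/devices/system/node/node*/numastat. A rough sketch of computing the miss ratio from them:

Code:
# per-node miss ratio, computed from the kernel counters
for f in /sys/devices/system/node/node*/numastat; do
    awk -v node="$f" '
        $1 == "numa_hit"  { hit  = $2 }
        $1 == "numa_miss" { miss = $2 }
        END {
            if (hit + miss > 0)
                printf "%s: %.2f%% miss\n", node, 100 * miss / (hit + miss)
        }
    ' "$f"
done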


Enabling NUMA on the VM adds automatic scheduling of the VM cores onto physical cores, so you can enable it on the VM as well.
 
In my case it is high:

Code:
numa_hit          7111758603
numa_miss          369622021
numa_foreign       420224150
interleave_hit          1378
local_node        7108845048
other_node         372535576

You said "adds automatic scheduling of the VM cores onto physical cores".
How can I do this on Proxmox? I don't see this option in the GUI.
 

So you have around 5% misses (numa_miss / numa_hit = 369622021 / 7111758603 ≈ 5%). That's not too bad.

As for the automatic scheduling: just enable the NUMA checkbox in the VM's advanced CPU options.
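
If you prefer the CLI over the GUI, the same option can be set with qm; a sketch using a hypothetical VM id of 100:

Code:
# enable NUMA for VM 100 (same effect as the checkbox in the GUI)
qm set 100 --numa 1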

It's also possible to do NUMA pinning manually (see the documentation) by editing the VM configuration file:

Code:
numa[n]: cpus=<id[-id];...> [,hostnodes=<id[-id];...>]
         [,memory=<number>] [,policy=<preferred|bind|interleave>]

NUMA topology.

cpus=<id[-id];...>
    CPUs accessing this NUMA node.
hostnodes=<id[-id];...>
    Host NUMA nodes to use.
memory=<number>
    Amount of memory this NUMA node provides.
policy=<bind | interleave | preferred>
    NUMA allocation policy.
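
As a concrete illustration, a fully pinned two-node layout could look like the snippet below in /etc/pve/qemu-server/<vmid>.conf. The core count and memory sizes are made up for the example; adjust them to your VM:

Code:
# 2 virtual sockets x 4 cores = 8 vCPUs, 16 GiB RAM in total
sockets: 2
cores: 4
memory: 16384
numa: 1
# pin each virtual NUMA node to one host node, with half the RAM each
numa0: cpus=0-3,hostnodes=0,memory=8192,policy=bind
numa1: cpus=4-7,hostnodes=1,memory=8192,policy=bind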
 
Feedback: the performance problem was real, but it turned out to be caused by the kernel version. After updating to the .15 kernel, everything went back to normal.
It was not NUMA-related.
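
For anyone who hits the same symptom: you can check which kernel the host is actually running with standard commands before suspecting NUMA:

Code:
# kernel currently running on the Proxmox host
uname -r

# versions of the installed Proxmox packages, including kernels
pveversion -v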
 
