Hi everyone,
I have a host with 2 sockets (8 physical cores each, 16 logical cores each with Hyper-Threading) running Proxmox 5.4.13. Each socket has 32 GB of RAM attached. I have "NUMA" enabled in all VMs.
When I run numastat, the results are not what I expected; I get too many numa_miss events:
Code:
root@pve1:~# numastat
                           node0           node1
numa_hit                83133278        66176090
numa_miss                6891218        19498691
numa_foreign            19498691         6891218
interleave_hit             29932           29641
local_node              83132082        66142957
other_node               6892414        19531824
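For context, the miss rate per node can be computed from those counters. A small sketch, using the numbers pasted above (numa_miss as a fraction of all pages allocated on each node):

```python
# Per-node counters copied from the numastat output above.
counters = {
    "node0": {"numa_hit": 83133278, "numa_miss": 6891218},
    "node1": {"numa_hit": 66176090, "numa_miss": 19498691},
}

def miss_rate(node):
    """Fraction of allocations on this node that fell back from the other node."""
    c = counters[node]
    return c["numa_miss"] / (c["numa_hit"] + c["numa_miss"])

for node in counters:
    print(f"{node}: {miss_rate(node):.1%}")
```

So node1 is serving roughly one in five allocations for processes preferring node0, which matches the imbalance seen per VM below.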
I have four VMs:
- 1 Windows Server with 8 GiB (8192 MiB) and 2 processors (1 socket, 2 cores) (pid 2235)
- 1 Windows Server with 16 GiB (16384 MiB) and 4 processors (2 sockets, 2 cores) (pid 2562)
- 1 Windows Server with 32 GiB (32768 MiB) and 8 processors (2 sockets, 4 cores) (pid 4819)
- 1 Windows 10 with 8 GiB (8192 MiB) and 2 processors (1 socket, 2 cores) (pid 21910)
When running numastat to check the memory usage per VM, I get unbalanced results for all of them: single-socket VMs do not take their memory from a single node, and dual-socket VMs do not split memory evenly across the nodes. For example, for the 32 GiB VM (pid 4819), the results should be approximately 16000 MB on node 0 and 16000 MB on node 1, but they are about 12000 (node 0) and 20000 (node 1).
Code:
root@pve1:~# numastat -c kvm

Per-node process memory usage (in MBs)
PID               Node 0  Node 1  Total
----------------  ------  ------  -----
2235 (kvm)          1283    6926   8210
2238 (kvm-nx-lpa       0       0      0
2323 (kvm-pit/22       0       0      0
2562 (kvm)         10984    5374  16357
2564 (kvm-nx-lpa       0       0      0
2611 (kvm-pit/25       0       0      0
4819 (kvm)         12377   20433  32810
4821 (kvm-nx-lpa       0       0      0
4956 (kvm-pit/48       0       0      0
21910 (kvm)         5909    2314   8222
21912 (kvm-nx-lp       0       0      0
21954 (kvm-pit/2       0       0      0
----------------  ------  ------  -----
Total              30553   35046  65600
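To quantify the imbalance per VM, here is a quick sketch with the numbers copied from the table above (a well-placed single-socket VM should sit almost entirely on one node, and a dual-socket VM near 50/50):

```python
# (node 0 MB, node 1 MB) per VM, copied from the numastat -c output above.
vms = {
    2235:  (1283, 6926),    # 1-socket VM: ideally almost all on one node
    2562:  (10984, 5374),   # 2-socket VM: ideally ~50/50
    4819:  (12377, 20433),  # 2-socket VM: ideally ~50/50
    21910: (5909, 2314),    # 1-socket VM: ideally almost all on one node
}

def node0_share(pid):
    """Fraction of the VM's resident memory sitting on host node 0."""
    n0, n1 = vms[pid]
    return n0 / (n0 + n1)

for pid in vms:
    print(f"pid {pid}: {node0_share(pid):.0%} on node 0")
```

For pid 4819 this gives about 38% on node 0 instead of the expected ~50%, and the single-socket VMs still have a substantial minority of their memory on the "wrong" node.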
Questions:
1) Is there anything else to configure?
a) Maybe the memory in the following VM options?
numa[n]: cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<number>] [,policy=<preferred|bind|interleave>]
I guess the policy should be 'preferred'.
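For 1a), this is a sketch of what explicit pinning might look like in /etc/pve/qemu-server/&lt;vmid&gt;.conf for the 32 GiB / 8-vCPU VM, splitting it into two virtual NUMA nodes bound to one host node each. The hostnodes assignments and policy=bind are my assumptions (I believe a policy must be given whenever hostnodes is set), not something I have verified on this host:

```
numa0: cpus=0-3,hostnodes=0,memory=16384,policy=bind
numa1: cpus=4-7,hostnodes=1,memory=16384,policy=bind
```

Note that the two memory= values must add up to the VM's total memory (16384 + 16384 = 32768 MiB here).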
b) taskset? It seems it only handles CPU affinity, not memory.
2) If you force Proxmox to use a particular NUMA node for a VM, you lose flexibility in terms of scheduling, right? Are there any other negative implications?
3) Can I use numad in Proxmox? http://www.admin-magazine.com/Archive/2014/20/Best-practices-for-KVM-on-NUMA-servers
Thanks in advance.