I'm not a virtualization newbie, but I still have a lot to learn about Proxmox/KVM/QEMU, so please bear with me. After searching all kinds of wikis, forums and mailing lists I still have difficulty understanding whether the following advanced configuration doesn't work because it's nonsense, or because I'm simply doing it the wrong way.
So here's the story:
My hardware setup is a dual-socket Xeon E5620 server (4 cores / 8 threads per socket) with 48GB RAM, running 3 VMs: two smaller ones with 4GB RAM and 4 cores each, and one larger one with 32GB RAM and 4 cores.
My (slightly limited) knowledge tells me it's a good idea to pin the two smaller VMs to the 1st CPU and the bigger VM to the 2nd CPU. But this only works if there is more than 32GB of RAM available to the 2nd CPU; otherwise the VM's 32GB of virtual RAM needs to span two NUMA nodes. To solve this problem I equipped the dual-socket server with a 12+36GB RAM split instead of the traditional 24+24GB setup.
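For the record, the kind of binding I have in mind would look something like this in the VM's config file. This is only a sketch: VMID 100 is a placeholder for the big VM, and the numaN option syntax is taken from the qm config reference.

Code:
# /etc/pve/qemu-server/100.conf  (VMID 100 is a placeholder)
# Enable NUMA awareness for the guest:
numa: 1
# Expose one guest NUMA node with all 4 vCPUs and all 32GB,
# bound to host node 1 (the socket with 36GB):
numa0: cpus=0-3,hostnodes=1,memory=32768,policy=bind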
My problem now is that the output of "numactl -H" always looks like this, no matter which numactl or taskset commands I use:
Code:
root@pve1:~# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 8 9 10 11
node 0 size: 12005 MB
node 0 free: 5802 MB
node 1 cpus: 4 5 6 7 12 13 14 15
node 1 size: 36286 MB
node 1 free: 9318 MB
node distances:
node   0   1
  0:  10  15
  1:  15  10
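To see where the big VM's memory actually ends up, I believe one can check the QEMU process directly with numastat. A sketch (again assuming the placeholder VMID 100; Proxmox keeps the pidfile under /var/run/qemu-server/):

Code:
# Find the QEMU process of the VM (VMID 100 is a placeholder)
pid=$(cat /var/run/qemu-server/100.pid)
# Show that process's memory usage broken down per NUMA node
numastat -p "$pid"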
I don't understand why the one big VM isn't consuming all of its 32GB RAM on NUMA node 1. If it did, I guess I could solve another problem more easily: keeping all of its vCPU threads computing on NUMA node 1 instead of letting them hop between node 0 and node 1.
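The per-thread pinning I have in mind would look roughly like this. It's only a sketch, assuming the placeholder VMID 100 and that node 1 owns host CPUs 4-7 and 12-15, as in the "numactl -H" output above:

Code:
# Pin every thread of the VM's QEMU process (VMID 100 is a placeholder)
# to the node-1 CPUs (4-7,12-15 per "numactl -H")
pid=$(cat /var/run/qemu-server/100.pid)
for tid in $(ls /proc/$pid/task); do
    taskset -pc 4-7,12-15 "$tid"
done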
I hope that you guys can help me clarify this. The topic has been bothering me for days.