NUMA questions again

cwiggs

Aug 7, 2021
I've read the NUMA wiki here: https://pve.proxmox.com/wiki/NUMA and the admin guide regarding NUMA here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_virtual_machines_settings; however, I still have some questions.

* Is NUMA only useful if your PVE *host* has more than 1 physical CPU?
* The documentation says that if you enable NUMA, you should give the NUMA-enabled VM the same number of vCPUs as the host has CPUs. That seems problematic if I don't want a VM to use all the CPU resources of the host. E.g. if I have 4 physical CPUs and I set the VM to have 4 vCPUs, and the VM uses 100% of its CPU resources, it could negatively affect the host and/or other VMs.
* The documentation says that NUMA is required for hot-pluggable CPU and Memory. If your hardware doesn't support NUMA, does that mean you cannot have hot-pluggable CPU and Memory, or should you still enable NUMA to get hot-pluggable CPU and Memory but not the other benefits of NUMA?

Thank you.
 
* Is NUMA only useful if your PVE *host* has more than 1 physical CPU?
You might want to use NUMA even with just a single socket if you are running a multi-chiplet CPU like a Ryzen 5900X.
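For reference, a quick way to check how many NUMA nodes a host actually exposes (standard Linux tools; the node count and CPU lists will vary by machine):

Code:
# nodes, their CPUs, and per-node memory
numactl --hardware
# or just the summary lines
lscpu | grep -i numa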
 
It says that it is recommended to set the number of sockets for the VM to the number of NUMA nodes (hardware CPU sockets) you have.
This is not the same as vCPUs.

Ah, you are right, I misunderstood. So if the host has 2 physical CPUs, I should enable NUMA on the VM and give it 2 sockets as well, but the vCPU count can be anything I want?
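In config terms I guess that would be something like this (VM ID 100 is just a placeholder):

Code:
# 2 virtual sockets to mirror the 2 NUMA nodes, 4 cores each = 8 vCPUs total
qm set 100 --sockets 2 --cores 4 --numa 1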

You might want to use NUMA even with just a single socket if you are running a multi-chiplet CPU like a Ryzen 5900X.
Ah interesting. Does numactl show more than 1 "node" on a 5900X platform then?
 
Ah interesting. Does numactl show more than 1 "node" on a 5900X platform then?
Usually it's seen as 1 node, but I (5950X) still have to experiment with the BIOS setting that exposes both CCDs as different NUMA nodes. The memory latency is the same, but the L3 cache is per CCD. The Linux scheduler gets better about this with newer kernel versions, so I don't know if it's worth the trouble. Also, one CCD is typically from a fast bin and the other from a slow bin, so pinning cores might not be the best option overall. Someone please correct me if I'm wrong about any of this.
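If anyone wants to check this on their own box, one way to see how the L3 is split per CCD is plain sysfs (the lists you get will depend on the CPU and BIOS settings):

Code:
# one CPU list per shared L3 slice - on a 5900X/5950X, one list per CCD
# index3 is usually the L3 on x86; check cache/index*/level if unsure
cat /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list | sort -u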
 
The only question I don't think has been answered is:

* The documentation says that NUMA is required for hot-pluggable CPU and Memory. If your hardware doesn't support NUMA, does that mean you cannot have hot-pluggable CPU and Memory, or should you still enable NUMA to get hot-pluggable CPU and Memory but not the other benefits of NUMA?

Does anyone know about this? I don't have multiple CPU sockets, but I've been enabling NUMA, and I'm not sure whether I should be.
 
* The documentation says that NUMA is required for hot-pluggable CPU and Memory. If your hardware doesn't support NUMA, does that mean you cannot have hot-pluggable CPU and Memory, or should you still enable NUMA to get hot-pluggable CPU and Memory but not the other benefits of NUMA?
It's fine. There are no issues with enabling the VM NUMA option on a single socket (or other UMA system).
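For completeness, a minimal sketch of the single-socket hotplug setup being described (VM ID 100 is hypothetical; the guest OS still needs its own hotplug support):

Code:
# NUMA enabled on a single-socket host, plus CPU and memory hotplug
qm set 100 --numa 1 --hotplug disk,network,usb,cpu,memory
# start with fewer active vCPUs than sockets*cores so more can be hot-plugged later
qm set 100 --sockets 1 --cores 8 --vcpus 4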
 
