CPU pinning?

Jamie

I have recently created a gaming VM and a workstation VM, both running Windows 10.

Host
-------------------------
AMD 3950X, 16 cores / 32 threads


VM1 (workstation)
---------------------
20 threads


VM2 (gaming)
---------------------
10 threads


I have noticed that the threads I assign are not dedicated to their VMs. In Unraid, I can pin CPU threads so that a VM can only use its pinned threads and no others. Is this possible with Proxmox?

I have seen taskset; is that the only way to do this sort of thing? If so, how do I determine which threads are physical and which are logical (SMT siblings)?
 
You should also be able to set the numa0, numa1, ... options for the VM:

--numa[n] cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<number>] [,policy=<preferred|bind|interleave>]
-- man qm ( https://pve.proxmox.com/pve-docs/qm.1.html )

This is not exposed in the web interface, but it can be done with, e.g., qm set VMID --numa0 'cpus=1;2;3-5,...' --numa1 ... (quote the value, since ; is special to the shell).
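For example (the VM ID and core range here are just placeholders, not from this thread), assigning guest vCPUs 0-19 to virtual NUMA node 0 could look like:

Bash:
# Quote the value: ';' and ',' would otherwise be interpreted by the shell.
# "cpus" here are guest vCPU IDs, not host cores.
qm set 100 --numa0 'cpus=0-19'

As far as I understand, this only defines the guest-visible NUMA topology; on its own it doesn't pin QEMU threads to specific host cores.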
 
I don't quite understand NUMA even after reading some documentation.

My host CPU has only a single NUMA node. So suppose I enable numa=1 for the VM and then set the NUMA CPUs, e.g. --numa0 cpus=0-19 for 20 threads.

What does this actually do, if anything?

My goal here is to have two VMs: one with 20 threads and the other with 12 threads. I would like these VMs not to share threads, with each VM's threads dedicated to it.
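For reference, here is how I checked that the host only exposes a single NUMA node (a minimal sketch, assuming the standard util-linux and numactl tools are installed):

Bash:
# Number of NUMA nodes and which host threads belong to each
lscpu | grep -i numa

# More detail, including per-node memory
numactl --hardware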
 
Ah, okay, then you can really use taskset as wolfgang proposed. For example, the following command would pin the VM with ID 104 to CPUs 0 to 11:

Bash:
taskset --cpu-list  --all-tasks --pid 0-11 "$(< /run/qemu-server/104.pid)"

A hookscript that does this on every VM start, reading the CPU set definition from a file "/etc/pve/qemu-server/$vmid.cpuset" (for more flexibility), could look like this:
Bash:
#!/bin/bash

# PVE hookscript: after a VM starts, pin all of its threads to the
# CPU list stored in /etc/pve/qemu-server/<vmid>.cpuset

vmid="$1"
phase="$2"

if [[ "$phase" == "post-start" ]]; then
    # PID of the VM's main QEMU process
    main_pid="$(< "/run/qemu-server/$vmid.pid")"

    # hard-coded alternative:
    #cpuset="0-11"
    # read the CPU list (e.g. "0-11") from the per-VM file
    cpuset="$(< "/etc/pve/qemu-server/$vmid.cpuset")"

    taskset --cpu-list --all-tasks --pid "$cpuset" "$main_pid"
fi

Copy that over to a PVE storage and set it as the VM's hookscript; e.g., for the "local" storage do:

Bash:
mkdir -p /var/lib/vz/snippets
cp taskset-hook.sh /var/lib/vz/snippets
chmod +x /var/lib/vz/snippets/taskset-hook.sh

qm set VMID --hookscript local:snippets/taskset-hook.sh
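
The per-VM cpuset file the script reads is just a plain-text CPU list in the same format taskset --cpu-list expects, e.g. for VM 104:

Bash:
echo "0-11" > /etc/pve/qemu-server/104.cpuset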

Note: most of that was written off the top of my head, so it may have some small issues that still need to be ironed out.
 
I have a Proxmox server running on a 2-CPU system. I've enabled NUMA in the web interface and set it to 2 sockets with 10 cores each. This displays properly in the Windows VM and has been working fine; I can even see that the memory is allocated properly, with each node having half the RAM.

My problem comes when I try gaming. I also have an RX580 passed through that works fine in every regard, but at about 50% performance. On the physical machine the RX580 is attached to the PCIe bus of the 2nd CPU. Is there a way in Windows 10 to see which NUMA node it thinks the GPU is attached to? I have a theory that the GPU is accessing RAM across the nodes, but I can't nail it down.

Not meaning to hijack a thread here, but I think the answer already lies in here somewhere if I could apply it.
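From the host side I can at least confirm which node the kernel thinks the card is on (the PCI address below is just an example; lspci shows the real one):

Bash:
# Find the GPU's PCI address
lspci | grep -i vga

# NUMA node the device is attached to (-1 = no node information)
cat /sys/bus/pci/devices/0000:41:00.0/numa_node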
 
Can I do "qm set VMID --numa0 cpus=1;2;3-5,... --numa1 ..." on a live system?
Is it a persistent change?
If I can't change which node Windows thinks the GPU is on, maybe I can invert the nodes for the guest?
 
Sorry for the necrobump, but I think this is relevant for the discussion.

taskset will bind that process to the specified CPUs, so it won't be scheduled on any others. But that doesn't necessarily mean those CPUs will be dedicated to that process (i.e., other VMs that haven't been CPU-pinned, and even host processes, will still use your pinned CPUs).

Are there any solutions to address that specific scenario on Proxmox (or Linux in general)?
 
Hi, I am a computer noob...
I am using an i5-10600K, which has 6 cores and 12 threads.
If I want to pin 4 cores to the VM with ID 104, does that mean I only need to change the line #cpuset="0-11" in taskset-hook.sh to #cpuset="0-3"?
Thanks.
 
I am also looking for a way to bind my VM to dedicated physical CPUs. The PVE documentation on NUMA is hard to understand.
 
I have the same setup (GPU attached to the second CPU's PCIe bus)... did you ever figure this out?
 
You could theoretically manage each of your VMs with taskset, but ideally the CPU scheduler already notices when a core is occupied. However, pinning each of your VMs and keeping track of where each one is pinned is a very manual task, made worse once you involve hyperconverged storage, firewall rules, and other tasks that take up significant CPU cycles. If PVE had a checkbox for "CPU pinning", with the understanding that the system would pin the specified VM to a set of CPUs and try not to run other VMs or processes on those "reserved" CPUs, that would be excellent. One step further would be to allow specifying which cores to pin. A step further yet would be warning admins when overlapping CPU reservations are configured. One final step would be disabling CPU pinning when a VM has to be moved to another host, to avoid overlap.

The trouble is, if your host has, say, 8 cores, and you pin 3 VMs to 2 cores each, and then a fourth VM runs on those cores, you've just over-committed your resources and forced the hypervisor OS to overlap CPU demand... The same idea scales up. The beauty of not pinning CPU cores is that when core usage fluctuates, the load can be balanced nicely. So in reality, only high-performance VMs in specific scenarios are likely to benefit from CPU pinning. I'm still on board with the idea, because I am manually applying this solution for a few customers who are in fact seeing a performance benefit from it. But it must be understood that this is a solution for specific problems.
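For what it's worth, the closest kernel-level mechanisms I know of are CPU isolation at boot time and host-process affinity; a rough sketch (the core ranges are only illustrative):

Bash:
# Option 1: keep the scheduler from placing ordinary tasks on cores 2-7,
# reserving them for explicitly pinned VMs. Append to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
#   isolcpus=2-7
# then apply and reboot:
update-grub

# Option 2: confine everything systemd spawns to cores 0-1 by setting
#   CPUAffinity=0 1
# in /etc/systemd/system.conf, then reboot.

Both are blunt instruments that affect the whole host, so treat them as an experiment rather than a turnkey feature.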
Thanks,


Tmanok
 
I am evaluating Proxmox (first time) on EPYC and trying to understand VM pinning and NUMA topics. My understanding is that there are different recommendations (by AMD etc.) depending on the VM's use. My CPU is a 4x4 multi-chip part, and I will have small VMs, i.e. each VM will fit into one CCD (<8 vCPUs). I will also use PCIe passthrough, and each CCD similarly has its own PCIe root complex, i.e. my GPU is on one root complex and my NICs are on another. The performance difference might be small, or meaningful only in a limited set of scenarios, but I think it would be quite nice if a VM could be pinned from the GUI to a set of cores (equal to or larger than its number of vCPUs) to fully utilize L3 cache and I/O locality on these multi-chip parts.
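
For reference, the L3/CCD layout I want to match can be read from the host (assuming util-linux's lscpu; hwloc's lstopo also shows the PCIe root complexes):

Bash:
# The CACHE column groups CPUs that share caches; the L3 groups are the CCDs
lscpu -e=CPU,CORE,SOCKET,NODE,CACHE

# Full topology view, including PCI devices (from the hwloc package)
lstopo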

Mete
 
I have an 8-core Proxmox server running a Windows gaming VM. I want to dedicate 6 cores to the gaming VM exclusively. I understand I can pin the VM to, say, cores 3-8 with taskset.

My questions are: does pinning prevent other processes from running on those cores, or does it only limit the pinned VM to 3-8? To reserve 3-8 for the gaming VM alone, do I then have to pin everything else to 1-2? If so, how do you pin the Proxmox host?
 
Hi Pharpe,

Please read my detailed post above, which answers your questions. TL;DR: no, it doesn't prevent other processes from using the pinned cores, and no, you likely can't prevent that without computer-science-level knowledge. We're asking the Proxmox team to consider making this a feature; maybe an expert software engineer will swing by and drop us a script, or inform us of a kernel-level feature that we're missing.

Cheers,


Tmanok
 
