Pinning vCPU threads to real pCPU cores and preventing the host OS from using those pCores

McGhost

New Member
Oct 11, 2025
Hello,

I would like to pin VM vCPU threads to specific physical CPU threads.
I mean the setting where you define CPU affinities in VM / Hardware / Processor / CPU affinity: n-m.
E.g. if I set the CPU affinity to 2,5,9-11, then with the current functionality the vCPU threads run on those pCores wherever the host OS scheduler happens to place them (in one sense in a random manner). Instead, I would like an option so that vCPU thread 0 is pinned to pCore 2, vCPU thread 1 to pCore 5, vCPU threads 2-4 to pCores 9-11, and they use just those cores and are not dynamically scheduled onto other pCores.
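
For reference, if I read the docs right, that GUI setting ends up as an affinity line in the VM config (/etc/pve/qemu-server/<vmid>.conf), something like this (my assumption of the syntax, matching the example above):

affinity: 2,5,9-11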
I think part of this can be done using a post-start hookscript: read the affinity list from the VM config file, then capture the current vCPU thread IDs with some OS tool (like ps) and pin each thread individually, roughly as in the sketch below. However, as I have understood it, this does not ensure that the threads stay on the above-mentioned pCores for the full VM lifetime; the host OS scheduler might move them to other pCores dynamically.
Is this correct?
If yes, how can the above be accomplished? If it is not currently possible with Proxmox, is it possible that such a feature will be added later?
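
Something like this rough sketch is what I had in mind for the hookscript (untested; the pid file path, the "CPU <n>/KVM" thread names and the pcores list are my assumptions):

#!/usr/bin/env python3
# Rough, untested hookscript sketch: pin each QEMU vCPU thread to one fixed
# physical core. Paths, thread names and the pcores list are assumptions.
import os
import re
import sys

vmid = sys.argv[1]           # Proxmox passes the VM ID as the first argument
phase = sys.argv[2]          # and the phase (pre-start, post-start, ...) as the second
pcores = [2, 5, 9, 10, 11]   # desired pCores, in vCPU order (vCPU 0 -> 2, 1 -> 5, ...)

if phase == "post-start":
    # qemu-server writes the main QEMU PID here once the VM is running
    with open(f"/var/run/qemu-server/{vmid}.pid") as f:
        qemu_pid = int(f.read().strip())

    # vCPU threads show up as "CPU <n>/KVM" in /proc/<pid>/task/<tid>/comm
    vcpu_tids = {}
    for tid in os.listdir(f"/proc/{qemu_pid}/task"):
        with open(f"/proc/{qemu_pid}/task/{tid}/comm") as f:
            m = re.match(r"CPU (\d+)/KVM", f.read())
        if m:
            vcpu_tids[int(m.group(1))] = int(tid)

    # pin vCPU n to pcores[n]; on Linux sched_setaffinity also accepts a thread ID
    for vcpu, tid in sorted(vcpu_tids.items()):
        os.sched_setaffinity(tid, {pcores[vcpu]})

I assume this would be registered with something like qm set <vmid> --hookscript local:snippets/pin-vcpus.py, and the result could be checked with taskset -cp <tid> on the vCPU thread IDs. But does such a per-thread affinity mask actually survive for the whole VM lifetime?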

How can I make the host OS scheduler keep the above-mentioned pCores free from other processes/threads? I assume there are some kernel boot options for this, but are they usable in this case?

I think at least the following kernel options are related to this (see the example kernel command line after the quoted documentation below). But is isolcpus alone enough?

isolcpus= [KNL,SMP] Isolate CPUs from the general scheduler.
The argument is a cpu list, as described above.

This option can be used to specify one or more CPUs
to isolate from the general SMP balancing and scheduling
algorithms. You can move a process onto or off an
"isolated" CPU via the CPU affinity syscalls or cpuset.
<cpu number> begins at 0 and the maximum value is
"number of CPUs in system - 1".

This option is the preferred way to isolate CPUs. The
alternative -- manually setting the CPU mask of all
tasks in the system -- can cause problems and
suboptimal load balancer performance.

nohz_full= [KNL,BOOT]
The argument is a cpu list, as described above.
In kernels built with CONFIG_NO_HZ_FULL=y, set
the specified list of CPUs whose tick will be stopped
whenever possible. The boot CPU will be forced outside
the range to maintain the timekeeping. Any CPUs
in this list will have their RCU callbacks offloaded,
just as if they had also been called out in the
rcu_nocbs= boot parameter.


rcu_nocbs= [KNL]
The argument is a cpu list, as described above.

In kernels built with CONFIG_RCU_NOCB_CPU=y, set
the specified list of CPUs to be no-callback CPUs.
Invocation of these CPUs' RCU callbacks will
be offloaded to "rcuox/N" kthreads created for
that purpose, where "x" is "b" for RCU-bh, "p"
for RCU-preempt, and "s" for RCU-sched, and "N"
is the CPU number. This reduces OS jitter on the
offloaded CPUs, which can be useful for HPC and
real-time workloads. It can also improve energy
efficiency for asymmetric multiprocessors.
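
For illustration, this is roughly what I would expect to put in /etc/default/grub on the host, assuming the same core list 2,5,9-11 as above (my assumption of how these parameters would be combined, not something I have tested):

GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=2,5,9-11 nohz_full=2,5,9-11 rcu_nocbs=2,5,9-11"

followed by update-grub (or editing /etc/kernel/cmdline and running proxmox-boot-tool refresh on systemd-boot installs, if I understand correctly) and a reboot.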
 