Suggestion: Option to consolidate Housekeeping-Tasks to Resource Subset like Xen does with Dom0

Ozymandias42

One of the last advantages of Xen over KVM in terms of virtualisation efficiency is that housekeeping tasks are only run on the CPUs assigned to Dom0; all others stay perfectly idle.

This has certain advantages in terms of power efficiency, VM exits, and reaction times for RT tasks, and it can be replicated on KVM in various ways.
The easiest way would be to use the `systemd.slice(5)` mechanism together with `systemd.resource-control(5)`, i.e. cgroups, to pin the system tasks to a subset of CPUs.
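A minimal sketch of that approach, assuming CPUs 0-3 are meant to act as the "Dom0-like" housekeeping set (the CPU numbers are placeholders for whatever fits a given machine):

```bash
# Confine host services, user sessions and PID 1 to CPUs 0-3 (cgroup v2).
# AllowedCPUs= is the cpuset knob documented in systemd.resource-control(5).
systemctl set-property --runtime system.slice AllowedCPUs=0-3
systemctl set-property --runtime user.slice AllowedCPUs=0-3
systemctl set-property --runtime init.scope AllowedCPUs=0-3
```

Dropping `--runtime` would make the change persistent across reboots.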

Another would be to assign CPU affinity to the host's processes via the `taskset(1)` utility, equivalent to the CPU-affinity assignment in VMs.
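For example (the PID, CPU list, and daemon name are placeholders):

```bash
# Restrict an already-running process to the housekeeping CPUs...
taskset -pc 0-3 1234
# ...or launch a process with a restricted affinity mask from the start
taskset -c 0-3 /usr/bin/some-daemon
```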

These alone, however, would not free the other CPUs of kthreads, as those are pinned to the root cgroup[1], of which all other cgroups are children; the same goes for clock ticks.
Kthreads can be moved off those CPUs by use of `cpuset` via the `cset(1)` utility, specifically `cset-shield(1)`.
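Roughly like this (the CPU list is again a placeholder, and `<command>` stands for whatever should run shielded):

```bash
# Shield CPUs 4-15: user-space tasks are moved off them, and --kthread=on
# also migrates all movable kernel threads out of the shielded set
cset shield --cpu=4-15 --kthread=on
# run a command inside the shielded set
cset shield --exec <command> -- <args>
# tear the shield down again
cset shield --reset
```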

Alternatively, the `isolcpus=<cpus>` Linux boot parameter could be used to keep the CPUs intended for the DomU equivalent free of ordinary processes and most kthreads.
Complementary to that are the two boot parameters `nohz_full=<cpus>`, which stops the periodic scheduler tick on those CPUs while a single task runs, and `rcu_nocbs=<cpus>`, which offloads RCU callbacks from them[2][3].
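On a GRUB-booted PVE/Debian host that would look something like this (the CPU list is a placeholder):

```bash
# /etc/default/grub -- reserve CPUs 4-15 for guests
GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=4-15 nohz_full=4-15 rcu_nocbs=4-15"
# then apply the change and reboot:
update-grub
```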

---

Removing kernel threads and other processes from a set of CPUs allows the VMs on them to become RT guests, given that no other VM runs on the same CPUs.
It also allows the use of QEMU's CPU power-management feature `cpu-pm=on`, which lets the guest trigger idle states on those CPUs, at the cost of slower host reaction times on them.
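With plain QEMU that is the `-overcommit` switch; how to pass it through to a PVE guest (e.g. via the `args:` line of the VM config) would be the part worth exposing:

```bash
# Guest-controlled CPU idle states; only sensible with the CPUs isolated as above
qemu-system-x86_64 -overcommit cpu-pm=on ...
```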

As most of the suggested tools and mechanisms are already present in a vanilla PVE install, the suggestion would be to add a way to use them to the UI at some point, or at least to reference the option in the docs for possible manual adjustments to a setup.

References:
[1] https://www.cs.memphis.edu/~xgao1/paper/ccs19.pdf, Section 3.1
[2] https://jeremyeder.com/2013/11/15/nohz_fullgodmode/
[3] https://unix.stackexchange.com/ques...kernel-boot-parameters-nohz-full-and-isolcpus
[4] https://manpages.debian.org/testing/cpuset/cset-shield.1.en.html
[5] https://www.qemu.org/docs/master/system/or1k/cpu-features.html
 
Hi, @Ozymandias42
Would you share some reliable links?

Good thing you asked. I had just assumed this to be the case, because Xen's architecture implies the advantages described above via those mechanisms by default.

However, this
https://monovm.com/blog/kvm-vs-xen/
and this
https://www.sciencedirect.com/science/article/abs/pii/S1383762120300035

seem to imply that Xen, even compared to vanilla KVM, has worse latencies and less efficient CPU scheduling. Surprising.
I guess the Linux kernel is just that much better at resource management than Xen, so much so that even with the overhead of housekeeping tasks on all cores it is still more efficient.

On the other hand, this does not mean that KVM can't be made even better with these mechanisms.

There are multiple posts in the linux_gaming and VFIO subreddits that show a notable performance increase:
https://www.reddit.com/r/linux_gami..._tickless_nohz_full_kernel_and_cpu_isolation/
https://www.reddit.com/r/linux_gami...ave_stumbled_upon_the_ultimate_configuration/
https://forums.unraid.net/topic/138478-using-the-nohz_full-kernel-option-to-improve-vm-latency/

There is also a performance-tuning guide for low-latency VMs from Red Hat, and a tuning guide from Intel:
https://access.redhat.com/sites/def...01-perf-brief-low-latency-tuning-rhel7-v1.pdf
https://www.intel.com/content/www/u...guide-for-intel-xeon-processor-platforms.html , Section 3.2.7
 