cpu affinity in 7.3

mete
New Member, Apr 4, 2022, Switzerland
I have a 4-core VM with 2 PCIe NICs passed through (all functions). I set the CPU affinity to 0-7, but I still see the QEMU process running on other cores. Does affinity require some condition to take effect, or does PCI passthrough prevent it?

Mete
 
The setting "0-7" means you are pinning your VM to 8 CPU cores, while your VM only has 4 available. Are you aware of that? What are you using to tell what CPUs your VM is running on?
Additionally, posting your VM config might be helpful. What mask does taskset -p <pid of the VM> yield?

Does affinity require some condition to take effect, or does PCI passthrough prevent it?
As far as I am aware, CPU core pinning outside the VM should not interfere with PCI passthrough in any way.
 
Yes, I am aware. I think the affinity in this case is a restriction, so the two numbers do not need to match. Am I wrong?

I think I was looking at it wrong. I created another VM to test more easily (still 4 cores, but pinned to 0-7) and ran a stress test on it. Only the cores I restricted it to (0-7) reach 100%, so there seems to be no problem.

taskset returns the affinity mask ff for the PID (/run/qemu-server/<vmid>.pid), so that seems to be OK.
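For anyone wondering what the ff means: it is a hexadecimal bitmask with one bit per CPU, so ff (binary 1111 1111) is exactly CPUs 0-7. A small Python sketch to decode such a mask (mask_to_cpus is just an illustrative helper, not part of taskset or any other tool):

```python
# Decode a taskset-style hex affinity mask into a list of CPU numbers.
# Each set bit N means CPU N is allowed; "ff" therefore means CPUs 0-7.
def mask_to_cpus(hex_mask: str) -> list[int]:
    value = int(hex_mask, 16)
    return [cpu for cpu in range(value.bit_length()) if value & (1 << cpu)]

print(mask_to_cpus("ff"))  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```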

I was pinning the VM manually before; happy to see this in 7.3.
 
Don't forget to clear your browser cache with CTRL+F5 after PVE upgrades. It's then in the VM's Hardware tab when editing your CPU.
 

Not to hijack the thread, but a related question: the "CPU Affinity" field under the VM's Processors settings does not seem to let me explicitly select which CPU cores I want mapped to a VM... Can you add this feature, please?

Rationale: Intel's latest Raptor Lake/Alder Lake processors combine "performance" and "efficiency" cores in one CPU. Assume you have a VM that may require 100% CPU utilization but you want **lower power consumption**: by force-mapping the "efficiency" cores to that VM, the total power draw of the system should in theory be much lower than with random mapping, where I might inadvertently allocate a "performance" core. Those cores boost within a higher TDP and draw more power from the wall, which is undesirable for my high-efficiency, low-power setup.

This is the output from my Raptor Lake i5-13600K. How would I explicitly map 4 efficiency cores to a VM? Would it be a '12,13,14,15' affinity?

Code:
root@centrix:~# lstopo
Machine (63GB total)
  Package L#0
    NUMANode L#0 (P#0 63GB)
    L3 L#0 (24MB)
      L2 L#0 (2048KB) + L1d L#0 (48KB) + L1i L#0 (32KB) + Core L#0
        PU L#0 (P#0)
        PU L#1 (P#1)
      L2 L#1 (2048KB) + L1d L#1 (48KB) + L1i L#1 (32KB) + Core L#1
        PU L#2 (P#2)
        PU L#3 (P#3)
      L2 L#2 (2048KB) + L1d L#2 (48KB) + L1i L#2 (32KB) + Core L#2
        PU L#4 (P#4)
        PU L#5 (P#5)
      L2 L#3 (2048KB) + L1d L#3 (48KB) + L1i L#3 (32KB) + Core L#3
        PU L#6 (P#6)
        PU L#7 (P#7)
      L2 L#4 (2048KB) + L1d L#4 (48KB) + L1i L#4 (32KB) + Core L#4
        PU L#8 (P#8)
        PU L#9 (P#9)
      L2 L#5 (2048KB) + L1d L#5 (48KB) + L1i L#5 (32KB) + Core L#5
        PU L#10 (P#10)
        PU L#11 (P#11)
      L2 L#6 (4096KB)
        L1d L#6 (32KB) + L1i L#6 (64KB) + Core L#6 + PU L#12 (P#12)
        L1d L#7 (32KB) + L1i L#7 (64KB) + Core L#7 + PU L#13 (P#13)
        L1d L#8 (32KB) + L1i L#8 (64KB) + Core L#8 + PU L#14 (P#14)
        L1d L#9 (32KB) + L1i L#9 (64KB) + Core L#9 + PU L#15 (P#15)
      L2 L#7 (4096KB)
        L1d L#10 (32KB) + L1i L#10 (64KB) + Core L#10 + PU L#16 (P#16)
        L1d L#11 (32KB) + L1i L#11 (64KB) + Core L#11 + PU L#17 (P#17)
        L1d L#12 (32KB) + L1i L#12 (64KB) + Core L#12 + PU L#18 (P#18)
        L1d L#13 (32KB) + L1i L#13 (64KB) + Core L#13 + PU L#19 (P#19)
  HostBridge
    PCIBridge
      PCI 01:00.0 (Ethernet)
        Net "enp1s0"
    PCI 00:02.0 (VGA)
    PCIBridge
      PCI 02:00.0 (NVMExp)
        Block(Disk) "nvme0n1"
    PCIBridge
      PCI 03:00.0 (NVMExp)
    PCIBridge
      PCI 04:00.0 (SAS)
 
I don't know that CPU, but the spec says 6 performance cores (with 2 threads each, I guess) and 8 efficiency cores. So threads 0-11 must be the performance cores and 12-19 the efficiency cores. If you specify something in 12-19, it should work. It can be a range like 12-15 or a list like you wrote, 12,13,14,15.
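That split can be sanity-checked with simple arithmetic. A sketch, assuming the spec's 6 P-cores with 2 threads each and 8 single-threaded E-cores, and that Linux enumerates the P-core threads first (as the lstopo output above suggests); hybrid_thread_ranges is a made-up helper name:

```python
# Derive the performance/efficiency thread ranges from the core counts,
# assuming P-core threads are enumerated first (as in the lstopo output).
def hybrid_thread_ranges(p_cores: int, p_smt: int, e_cores: int):
    p_threads = p_cores * p_smt   # 6 * 2 = 12 -> threads 0-11
    e_threads = e_cores           # E-cores have one thread each
    return (range(0, p_threads), range(p_threads, p_threads + e_threads))

perf, eff = hybrid_thread_ranges(p_cores=6, p_smt=2, e_cores=8)
print(list(perf))  # threads 0-11 (performance)
print(list(eff))   # threads 12-19 (efficiency)
```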
 
I can't find this setting anywhere within the management UI for my home cluster, does it need to be enabled somewhere?
@nominal Please see Dunuin's answer. Running 7.3+, you should see it under Hardware > Processors > textfield CPU affinity. Please make sure the Advanced checkbox is selected.

I would also agree with @mete here. As far as I am aware, the numbering for taskset in relation to the CPU cores should not change, and judging from the output you posted, cores 12-19 should be the efficiency cores. The textfield takes the cpuset list format, so a range, listing every CPU core separately, or a combination of both are all valid.

From man cpuset, chapter FORMATS:
List format
The List Format for cpus and mems is a comma-separated list of CPU or memory-node numbers and ranges of numbers, in ASCII decimal.

Examples of the List Format:

0-4,9 # bits 0, 1, 2, 3, 4, and 9 set
0-2,7,12-14 # bits 0, 1, 2, 7, 12, 13, and 14 set
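To illustrate the list format, here is a small Python sketch that parses such strings (parse_cpuset is an illustrative name, not a real library function); it shows that "12-15" and "12,13,14,15" describe the same set:

```python
# Parse the cpuset list format described above: a comma-separated list of
# CPU numbers and inclusive ranges, e.g. "0-2,7,12-14".
def parse_cpuset(spec: str) -> set[int]:
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

print(sorted(parse_cpuset("0-2,7,12-14")))  # [0, 1, 2, 7, 12, 13, 14]
print(parse_cpuset("12-15") == parse_cpuset("12,13,14,15"))  # True
```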
 
Thank you for helping confirm :)
I'll set the CPU affinity to "12,13,14,15", watch htop on the Proxmox host to see if only those cores are maxed out, and keep an eye on power consumption too.
 
You can also look at the cpu affinity mask as displayed by taskset: taskset -p <pid of your VM>

If I am not mistaken, the mask for your configuration should be displayed as F000.
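The F000 value follows from setting bits 12-15 of the mask. As a quick check, a Python sketch (cpus_to_mask is just an illustrative helper):

```python
# Build a taskset-style hex affinity mask from a list of CPU numbers:
# one bit per CPU, so CPUs 12-15 set bits 12-15 -> 0xF000.
def cpus_to_mask(cpus) -> str:
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

print(cpus_to_mask([12, 13, 14, 15]))  # -> "f000"
```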
 