I have a 4-core VM with two PCIe NICs passed through (all functions). I set the affinity to 0-7, but I still see the process running on other cores. Does affinity require a certain condition, or does PCI passthrough prevent it?
Mete
The setting "0-7" means you are pinning your VM to 8 CPU cores, while your VM only has 4 available. Are you aware of that? What are you using to tell what CPUs your VM is running on?
Additionally, posting your VM config might be helpful. What mask does taskset -p <pid of the VM> yield?
As far as I am aware, CPU core pinning outside the VM should not interfere with PCI passthrough in any way.
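To check what taskset would report without leaving a script, the same query and change can be made through the Linux syscalls taskset wraps. A minimal sketch, using this process's own PID (0) as a stand-in for the VM's PID:

```python
import os

# PID 0 means "the calling process"; in practice, pass the VM's PID instead.
pid = 0

before = os.sched_getaffinity(pid)       # set of CPUs the process may run on
print("current affinity:", sorted(before))

os.sched_setaffinity(pid, {0})           # pin to CPU 0 only
assert os.sched_getaffinity(pid) == {0}

os.sched_setaffinity(pid, before)        # restore the original mask
```

Run on the host as a user allowed to change the target process's affinity; the printed set should match the CPU list `taskset -cp <pid>` reports.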
root@centrix:~# lstopo
Machine (63GB total)
Package L#0
NUMANode L#0 (P#0 63GB)
L3 L#0 (24MB)
L2 L#0 (2048KB) + L1d L#0 (48KB) + L1i L#0 (32KB) + Core L#0
PU L#0 (P#0)
PU L#1 (P#1)
L2 L#1 (2048KB) + L1d L#1 (48KB) + L1i L#1 (32KB) + Core L#1
PU L#2 (P#2)
PU L#3 (P#3)
L2 L#2 (2048KB) + L1d L#2 (48KB) + L1i L#2 (32KB) + Core L#2
PU L#4 (P#4)
PU L#5 (P#5)
L2 L#3 (2048KB) + L1d L#3 (48KB) + L1i L#3 (32KB) + Core L#3
PU L#6 (P#6)
PU L#7 (P#7)
L2 L#4 (2048KB) + L1d L#4 (48KB) + L1i L#4 (32KB) + Core L#4
PU L#8 (P#8)
PU L#9 (P#9)
L2 L#5 (2048KB) + L1d L#5 (48KB) + L1i L#5 (32KB) + Core L#5
PU L#10 (P#10)
PU L#11 (P#11)
L2 L#6 (4096KB)
L1d L#6 (32KB) + L1i L#6 (64KB) + Core L#6 + PU L#12 (P#12)
L1d L#7 (32KB) + L1i L#7 (64KB) + Core L#7 + PU L#13 (P#13)
L1d L#8 (32KB) + L1i L#8 (64KB) + Core L#8 + PU L#14 (P#14)
L1d L#9 (32KB) + L1i L#9 (64KB) + Core L#9 + PU L#15 (P#15)
L2 L#7 (4096KB)
L1d L#10 (32KB) + L1i L#10 (64KB) + Core L#10 + PU L#16 (P#16)
L1d L#11 (32KB) + L1i L#11 (64KB) + Core L#11 + PU L#17 (P#17)
L1d L#12 (32KB) + L1i L#12 (64KB) + Core L#12 + PU L#18 (P#18)
L1d L#13 (32KB) + L1i L#13 (64KB) + Core L#13 + PU L#19 (P#19)
HostBridge
PCIBridge
PCI 01:00.0 (Ethernet)
Net "enp1s0"
PCI 00:02.0 (VGA)
PCIBridge
PCI 02:00.0 (NVMExp)
Block(Disk) "nvme0n1"
PCIBridge
PCI 03:00.0 (NVMExp)
PCIBridge
PCI 04:00.0 (SAS)
I can't find this setting anywhere within the management UI for my home cluster; does it need to be enabled somewhere?

@nominal Please see Dunuin's answer. Running 7.3+, you should see it under Hardware > Processors > CPU affinity (a text field). Please make sure the Advanced checkbox is selected.

From man cpuset, chapter FORMATS, "List format":

The List Format for cpus and mems is a comma-separated list of CPU or memory-node numbers and ranges of numbers, in ASCII decimal.

Examples of the List Format:
0-4,9          # bits 0, 1, 2, 3, 4, and 9 set
0-2,7,12-14    # bits 0, 1, 2, 7, 12, 13, and 14 set
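For illustration, a small sketch (hypothetical helper names) that parses this List Format and folds it into the bitmask taskset prints in hex:

```python
def parse_cpu_list(spec: str) -> set[int]:
    """Parse cpuset List Format, e.g. "0-4,9", into a set of CPU numbers."""
    cpus: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.update(range(lo, hi + 1))
        else:
            cpus.add(int(part))
    return cpus

def cpus_to_mask(cpus: set[int]) -> int:
    """Fold a set of CPU numbers into a taskset-style affinity bitmask."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

print(sorted(parse_cpu_list("0-4,9")))                   # [0, 1, 2, 3, 4, 9]
print(hex(cpus_to_mask(parse_cpu_list("0-2,7,12-14"))))  # 0x7087
```

The second example matches the man page: bits 0-2 give 0x7, bit 7 gives 0x80, bits 12-14 give 0x7000.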
taskset -p <pid of your VM>
F000
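Going the other way, a hex mask can be decoded back into a CPU list; F000 has bits 12-15 set, i.e. PUs P#12-P#15 in the lstopo output above. A quick sketch with a hypothetical helper:

```python
def mask_to_cpus(mask: int) -> list[int]:
    """Expand a taskset-style affinity bitmask into CPU numbers."""
    return [i for i in range(mask.bit_length()) if mask & (1 << i)]

print(mask_to_cpus(0xF000))  # [12, 13, 14, 15]
```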