CPU pinning with containers is confusing me

Nov 17, 2017
I have a 16-core/32-thread Threadripper (AMD 1950X).

root@zen:/var/log# numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
cpubind: 0 1
nodebind: 0 1
membind: 0 1
root@zen:/var/log# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 32126 MB
node 0 free: 8042 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 32221 MB
node 1 free: 11658 MB
node distances:
node   0   1
  0:  10  16
  1:  16  10
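
In case the core/thread pairing matters: as far as I know the hyperthread siblings can be read from the standard sysfs topology files (output obviously differs per machine):

for c in 0 1 2 3 4 5 6 7; do
    # each file lists a CPU together with its hyperthread sibling
    echo -n "cpu$c siblings: "
    cat /sys/devices/system/cpu/cpu$c/topology/thread_siblings_list
done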


I am trying to run 16 containers, each pinned to a single core, hoping for roughly 16x the throughput of a single-threaded test. The containers have IDs 150-165 and names ct0..ct15.
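
My understanding is that a taskset affinity mask with bit n set selects CPU n, i.e. the mask for CPU n is 1 << n. A quick check of that arithmetic (just illustrative):

for cpu in 1 3 5 31; do
    # print the single-bit hex mask for each CPU number
    printf 'CPU %-2d -> mask 0x%08x\n' "$cpu" $((1 << cpu))
done

which prints 0x00000002, 0x00000008, 0x00000020 and 0x80000000 - the masks I use below.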

I have a script that I think should apply CPU pinning:

#!/bin/bash

echo "Stopping containers"
for ct in $(seq 150 165); do
    pct stop "$ct"
done

echo "Launching pinned containers"
# Each mask sets a single bit: 0x2 is bit 1 (CPU 1), 0x8 is bit 3 (CPU 3),
# and so on up to 0x80000000, which is bit 31 (CPU 31).
taskset 0x00000002 pct start 150
taskset 0x00000008 pct start 151
taskset 0x00000020 pct start 152
taskset 0x00000080 pct start 153
taskset 0x00000200 pct start 154
taskset 0x00000800 pct start 155
taskset 0x00002000 pct start 156
taskset 0x00008000 pct start 157
taskset 0x00020000 pct start 158
taskset 0x00080000 pct start 159
taskset 0x00200000 pct start 160
taskset 0x00800000 pct start 161
taskset 0x02000000 pct start 162
taskset 0x08000000 pct start 163
taskset 0x20000000 pct start 164
taskset 0x80000000 pct start 165

echo "Resulting state"
pct cpusets
numactl --hardware
numactl --show
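
(The launch section could equally be written as a loop - this should be exactly equivalent, since the mask for container 150+i is bit 2*i+1:)

for i in $(seq 0 15); do
    # bits 1, 3, 5, ... 31 -> CPUs 1, 3, 5, ... 31
    mask=$(printf '0x%x' $((1 << (2 * i + 1))))
    taskset "$mask" pct start $((150 + i))
done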


When I run this I get:

root@zen:/var/log# /home/borris/ml/bindcpus.sh
Stopping containers
Launching pinned containers
Resulting state
-------------------------------------------------------------------------------------------
150: 8
151: 31
152: 3
153: 19
154: 10
155: 7
156: 11
157: 15
158: 2
159: 24
160: 30
161: 11
162: 3
163: 29
164: 8
165: 24
-------------------------------------------------------------------------------------------
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 32126 MB
node 0 free: 8205 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 32221 MB
node 1 free: 11453 MB
node distances:
node   0   1
  0:  10  16
  1:  16  10
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
cpubind: 0 1
nodebind: 0 1
membind: 0 1
root@zen:/var/log#


I really don't understand the output from pct cpusets. Firstly, the assignments don't look like the ones I thought I was specifying: each of my masks sets a single odd-numbered bit (CPUs 1, 3, 5, ... 31), yet the output contains even-numbered CPUs, and some CPUs (3, 8, 11 and 24) even appear twice. Most likely I'm not understanding something, but if someone could explain how the taskset masks relate to the output of pct cpusets I would be very grateful.
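
In case it helps to diagnose: I assume the cpuset a container actually got can be read straight from its cgroup, something like this (I'm guessing at the cgroup-v1 path layout, and that lxc-info reports the container's init PID):

CTID=150
# cpuset actually applied to the container's cgroup
cat /sys/fs/cgroup/cpuset/lxc/$CTID/cpuset.cpus
# affinity of the container's init process
taskset -cp "$(lxc-info -n $CTID -p -H)"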

Thanks

Borris