Hard-set CPU threads for LXC container

anaks

Member
Oct 11, 2021
Good day!
I have software with a licensing system that binds to specific processor threads.
On version 6.4 I used this line

lxc.cgroup.cpuset.cpus = 0-6

in the container's configuration file

/etc/pve/lxc/100.conf

and the container always ran on the same threads. But in version 7 this stopped working: with this parameter set, the container still gets access to all threads available on the machine.
 
hi,

replace lxc.cgroup.cpuset.cpus with lxc.cgroup2.cpuset.cpus, then reboot the container. you should see the correct CPUs in pct cpusets afterwards.
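for reference, PVE 7 defaults to a pure cgroup v2 layout, which is why the old lxc.cgroup.* keys are silently ignored there. a minimal sketch of the change in /etc/pve/lxc/100.conf (VMID and CPU range taken from your post):

# PVE 6.x (cgroup v1), no longer applied on a pure cgroup v2 host:
# lxc.cgroup.cpuset.cpus: 0-6
# PVE 7 (cgroup v2):
lxc.cgroup2.cpuset.cpus: 0-6

if in doubt about the layout, stat -fc %T /sys/fs/cgroup/ should print cgroup2fs on a pure v2 host.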
 
I've tried that, but it doesn't work.

104.conf

arch: amd64
cores: 10
hostname: test1
memory: 22528
net0: name=eth0,bridge=vmbr0,hwaddr=FA:0C:32:24:67:C5,ip=dhcp,tag=8,type=veth
onboot: 1
ostype: debian
rootfs: NVME:104/vm-104-disk-0.raw,size=50G
swap: 0
unprivileged: 1
lxc.cgroup2.cpuset.cpus: 0-9

pct cpusets
root@srv-virt:/etc/pve/lxc# pct cpusets
-------------------------------------------------------------------------------------------
104: 2 6 9 10 13 16 21 22 23 25
 
works here. did you restart the container with pct stop 104 && pct start 104?
 
Thank you! When I ran pct stop 104 && pct start 104 in the console, it worked. Before that, I had tried turning it off and on from the web interface.
 
great!

Before that, I had tried turning it off and on from the web interface.
the interface can behave differently with different shutdown options. if you want all options to be applied, you should do a "Reboot", or a "Stop" followed by "Start".
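if you want to double-check that the pin really took effect, you can also read the container's cpuset straight from the host's cgroup tree. a small sketch, assuming the default pure cgroup v2 layout of PVE 7 (the path below is where the container's cgroup lands here; it can differ with another layout):

pct stop 104 && pct start 104
cat /sys/fs/cgroup/lxc/104/cpuset.cpus           # the requested set
cat /sys/fs/cgroup/lxc/104/cpuset.cpus.effective # what the kernel actually granted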

please mark the thread as [SOLVED] so others with the same problem can know what to expect :)
 
But...

root@srv-virt:~# pct stop 104 && pct start 104
root@srv-virt:~# pct cpusets
-------------------------------------------------------------------------------------------
104: 0 1 2 3 4 5 6 7 8 9
108: 11 14 15 16 21 30
124: 0 2 9 11 12 14 16 19 21 24 25 26
125: 2 4 26 27
200: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
201: 0 6 9 13 25 27
203: 17 20
210: 4 7 19 24
214: 0 4 6 26
227: 7 15 17 19 20 29 30 31
228: 3 10 11 12 13 14 27 28
229: 1 18
230: 22 23
231: 5 8
232: 1 3 5 7 8 15 18 22 23 24 28 29
233: 1 17 20 29 30 31
234: 3 5 8 10 12 18
-------------------------------------------------------------------------------------------

after 5 seconds
root@srv-virt:~# pct cpusets
-------------------------------------------------------------------------------------------
104: 2 6 9 10 13 16 21 22 23 25
108: 11 14 15 16 21 30
124: 0 2 9 11 12 14 16 19 21 24 25 26
125: 2 4 26 27
200: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
201: 0 6 9 13 25 27
203: 17 20
210: 4 7 19 24
214: 0 4 6 26
227: 7 15 17 19 20 29 30 31
228: 3 10 11 12 13 14 27 28
229: 1 18
230: 22 23
231: 5 8
232: 1 3 5 7 8 15 18 22 23 24 28 29
233: 1 17 20 29 30 31
234: 3 5 8 10 12 18
-------------------------------------------------------------------------------------------
root@srv-virt:~#
 
I confirm this problem.
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-13 (running version: 7.0-13/7aa7e488)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.3.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-10
pve-docs: 7.0-5
pve-edk2-firmware: 3.20210831-1
pve-firewall: 4.2-3
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-16
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1

cat /etc/pve/nodes/srv/lxc/109.conf
arch: amd64
cores: 8
features: nesting=1
hostname: nikolya-test
memory: 1024
net0: name=eth0,bridge=vmbr0,hwaddr=8A:90:0F:B8:CE:2E,ip=dhcp,type=veth
ostype: debian
rootfs: HDD1:109/vm-109-disk-0.raw,size=8G
swap: 0
unprivileged: 1
lxc.cgroup2.cpuset.cpus: 1-8
root@srv:~# pct cpusets
-------------------------------------------
109: 4 5 7 8 9 10 11 12
206: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
207: 2 3 6 13 14 15
208: 0 1
-------------------------------------------
 
after 5 seconds
root@srv-virt:~# pct cpusets
-------------------------------------------------------------------------------------------
104: 2 6 9 10 13 16 21 22 23 25
ah, it seems the other containers took over those cores. i can reproduce that here as well, so it seems to be a bug. will look into it :)
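in the meantime, you can watch the reassignment happen right after a restart (same assumed cgroup v2 path as in my earlier post):

pct stop 104 && pct start 104
watch -n 1 cat /sys/fs/cgroup/lxc/104/cpuset.cpus
# starts as 0-9, then gets rewritten to a different set after a few seconds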
 