cset failing (PVE7)

blackpaw

Renowned Member
Nov 1, 2013
Trying to shield my pinned CPUs (0-7) with the following command:

Code:
cset shield --cpu=0-7

But I just get this error:

Code:
mount: /cpusets: none already mounted on /sys/fs/bpf.
cset: **> mount of cpuset filesystem failed, do you have permission?

The CPU pinning command itself works fine:

Code:
taskset --cpu-list --all-tasks --pid 0-7 "$(< /run/qemu-server/100.pid)"

Any suggestions? Thanks.

PVE 7.0-11, nosub repo.
 
PVE 7 switched to cgroups v2, so the bad news is that 'cset' doesn't work anymore.

But the good news is that systemd can do it natively now:
Code:
sudo systemctl set-property --runtime -- user.slice AllowedCPUs=0-7
sudo systemctl set-property --runtime -- system.slice AllowedCPUs=0-7
sudo systemctl set-property --runtime -- init.scope AllowedCPUs=0-7

Note that the mask is "allowed" now, not "shielded", so you need to invert it from your current 'cset' command. To reset, simply set the "AllowedCPUs" value to all available cores.
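
For example, on a 12-thread host the reset could look like this (just a sketch; adjust the range to whatever 'nproc' reports on your machine):

Code:
# reset: let the host slices use every core again (12 threads assumed)
sudo systemctl set-property --runtime -- user.slice AllowedCPUs=0-11
sudo systemctl set-property --runtime -- system.slice AllowedCPUs=0-11
sudo systemctl set-property --runtime -- init.scope AllowedCPUs=0-11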

Also, just in general, I would not recommend pinning anything to core 0, as the kernel treats that core a bit specially and it's not possible to migrate everything off of it. Use the higher cores for the VM instead and leave the lower ones for the host.
 
Thanks Stefan, that's good to know, and a cleaner-looking interface.

Would the following be appropriate? (12 Core Ryzen)

Code:
systemctl set-property --runtime -- user.slice AllowedCPUs=4-11
systemctl set-property --runtime -- machine.slice AllowedCPUs=0-3
systemctl set-property --runtime -- system.slice AllowedCPUs=0-3


Code:
taskset --cpu-list --all-tasks --pid 4-11 "$(< /run/qemu-server/100.pid)"
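
And to sanity-check the result, something like this should work (VM ID 100 assumed, as above):

Code:
# confirm what the slices actually allow now
systemctl show -p AllowedCPUs user.slice system.slice machine.slice
# confirm the affinity of the VM's main process
taskset --cpu-list --pid "$(< /run/qemu-server/100.pid)"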

Thanks.
 
Would the following be appropriate? (12 Core Ryzen)
For best performance on Ryzen, I'd recommend aligning your core layout to the physical layout of the CPU cores - that is, your CCX and CCD configuration. Check that with 'lstopo'. For a 12 core chip that probably means either 2x6 or 4x3, meaning your layout should work in counts of 3. Setting "AllowedCPUs" to 0-3 includes 4 cores (0, 1, 2 and 3), which is misaligned, so I'd rather go with 0-2 or 0-5.
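
A quick way to inspect that layout (assuming the hwloc and util-linux packages are installed):

Code:
# view of the cache and CCX topology (hwloc package)
lstopo
# per-thread view of which logical CPU sits on which physical core
lscpu --extended=CPU,CORE,SOCKET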

Also note that if you have SMT enabled, your 12-core chip will show 24 logical cores to the OS, and those are what you need to assign. Check 'nproc' and see what it reports.

If you already meant "6 cores with SMT", then I would suggest assigning cores based on SMT locality, i.e. rather go with 0-2,6-8 for the host and 3-5,9-11 for the pinned VM. Linux numbers SMT siblings in the lower and upper halves: on a 6-core/12-thread system, logical CPUs 0 and 6 are one physical core. Windows numbers them differently; there, 0 and 1 would share a pCPU.
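
You can verify the sibling pairing straight from sysfs (standard Linux path, shown here for CPU 0):

Code:
# lists the logical CPUs sharing CPU 0's physical core, e.g. "0,6"
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list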

The commands you posted would produce a different result than your initial 'cset' command. "machine.slice" is not used by PVE, so it won't have an effect. "user.slice" covers user processes on the host, for example your shell when you log in. Depending on your use case, it *might* make sense to share that with the VM as your posted command does, but usually you'd put it on the host cores as well.

From my knowledge, setting the value for "init.scope" as well can help migrate some system processes, but I'm not 100% sure on that; it might be fine to leave it off.
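
Putting that together, a sketch for a 6-core/12-thread chip with a 2x3 CCX layout might look like this (VM ID 100 carried over from above; verify your own topology first):

Code:
# host slices stay on the first CCX (cores 0-2 plus their SMT siblings 6-8)
sudo systemctl set-property --runtime -- user.slice AllowedCPUs=0-2,6-8
sudo systemctl set-property --runtime -- system.slice AllowedCPUs=0-2,6-8
sudo systemctl set-property --runtime -- init.scope AllowedCPUs=0-2,6-8
# pin the VM to the second CCX (cores 3-5 plus siblings 9-11)
taskset --cpu-list --all-tasks --pid 3-5,9-11 "$(< /run/qemu-server/100.pid)"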
 
Ah, I see, thanks. I misidentified my CPU; it's actually a 6-core with SMT in a 2x3 config.

Lots of fun tweaking time ahead :)
 
Did you get this working? I was about to start using isolcpus but stumbled across cset, then found it doesn't work on 7.x.

If you're able to post your example, that would help greatly.
 
