Hi
I'm using Proxmox to host multiple LXCs and VMs. To get good gaming performance on my Windows VM, I'm using cset and taskset to pin the VM's cores to the last 8c/16t of my CPU. I've run into an issue with cset and LXC containers: if I define a slice for the Windows VM, the LXC containers get access to all of the cores in the system slice, and AppArmor then runs everything in unconfined mode.
I run this command at boot, since I don't mind giving up 8c/16t to the VM as long as they stay pinned:
cset set -c 0-31 -s machine.slice && cset shield --kthread on --cpu 8-15,24-31 && cset proc --move --fromset=root --toset=system --threads --kthread --force
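For context, "at boot" just means a simple systemd oneshot unit along these lines (a rough sketch; the unit and script names are placeholders rather than exactly what I have, and the ordering against pve-guests.service is my assumption for making it run before guests autostart):

[Unit]
Description=Carve out the gaming cores with cset before guests start
# Assumption: guests autostart via pve-guests.service, so order before it.
Before=pve-guests.service

[Service]
Type=oneshot
RemainAfterExit=yes
# cset-pin.sh just contains the cset chain quoted above.
ExecStart=/usr/local/bin/cset-pin.sh

[Install]
WantedBy=multi-user.target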
This is the cset command that is run at VM launch via a hookscript:
cset proc --move --pid "$CPU_TASK" --toset=user --force
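In case it helps, the hookscript is basically this (a trimmed sketch, not my script verbatim; $CPU_TASK just comes from the QEMU pid file, which as far as I know is the stock qemu-server location):

#!/bin/bash
# Proxmox calls the hookscript with the VMID and the phase as arguments.
vmid="$1"
phase="$2"

if [ "$phase" = "post-start" ]; then
    # Grab the QEMU PID that Proxmox writes out once the VM is running.
    CPU_TASK="$(cat "/var/run/qemu-server/${vmid}.pid")"
    # Move the whole QEMU process into the shielded "user" set.
    cset proc --move --pid "$CPU_TASK" --toset=user --force
fi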
I have attached a few screenshots of what the LXC slices should look like compared to what they look like when I define my own cgroups.
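The same difference also shows up if I just dump each container's allowed CPUs, roughly like this (this assumes cgroup v1 with the cpuset controller mounted at /sys/fs/cgroup/cpuset and the containers under an lxc/ subtree; adjust the path for your hierarchy):

# Print which CPUs each LXC container's cpuset actually allows.
for d in /sys/fs/cgroup/cpuset/lxc/*/; do
    printf '%s: %s\n' "$d" "$(cat "${d}cpuset.cpus" 2>/dev/null)"
done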
Any help would be appreciated and thanks in advance.