LXC Cores vs CPU Limit

tsumaru720

May 1, 2016
Hi Guys

I noticed recently that my LXC containers have a new option called "Cores". I previously had an LXC container configured with a CPU limit of 1 on a 4-core box; the result was that the workload was spread across all cores, each loaded at about 25%. Overall 25% of the CPU was used, but no single core was maxed out.

If I change this around to Cores 1 and CPU limit unlimited, then the container maxes out a single core, and incidentally the container only sees the one core it is assigned.

This is absolutely fine.

I'm just curious though as to when I'd want to assign whole actual cores to a container instead of letting things get scheduled by the host.
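A quick way to see the difference from inside a guest is to compare the CPUs the kernel reports with the CPUs the process may actually be scheduled on. This is a minimal sketch (assuming a Linux guest with Python available); with `cpulimit` alone both numbers stay at the host's core count, while `cores` shrinks the affinity mask:

```python
import os

# Total CPUs the kernel reports. With only "cpulimit" set, this is
# still the full host count, since no cores are hidden.
visible = os.cpu_count()

# CPUs this process may actually be scheduled on. With "cores: 1"
# the cpuset restricts this to a single CPU.
allowed = len(os.sched_getaffinity(0))

print(f"visible CPUs: {visible}, schedulable CPUs: {allowed}")
```

Note that `os.sched_getaffinity` is Linux-only; neither value reflects a CFS quota (`cpulimit`), which is invisible to both.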
 
Simply do not set any limit (or what is the question?).

If I have a six-core host and I want to give an LXC container the processing power of 2 cores, I can either set "Cores" to 2 or "CPU limit" to 2. Both limit the CPU power the container gets, but they do it in different ways; I was just wondering what the pros and cons of each are.
 
Setting cores makes sense if you want to hide host details from the guest.

Else I would simply use cpulimit.
 
Or to give you a real reason (;-) ): the container may be running software which is unaware of one limitation and only honors the other. It might, for instance, spawn 8 threads on an 8-core host while being limited to the workload equivalent of 4 cores, where 4 threads would perform better.
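The thread-spawning pitfall above can be sketched as follows: size a worker pool by the affinity mask (which `cores` restricts) rather than by `os.cpu_count()` (which ignores cpuset restrictions). The `burn` task is a hypothetical placeholder for CPU-bound work:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# A "cores"-aware default: count the CPUs we may actually run on,
# not os.cpu_count(), which ignores cpuset restrictions.
# (A CFS quota from "cpulimit" is invisible to both, which is
# exactly the problem described above.)
workers = len(os.sched_getaffinity(0))

def burn(n: int) -> int:
    # Placeholder CPU-bound task.
    return sum(i * i for i in range(n))

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(burn, [10_000] * workers))

print(f"ran {len(results)} tasks on {workers} worker threads")
```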
 
Or to give you a real reason (;-) ): the container may be running software which is unaware of one limitation and only honors the other. It might, for instance, spawn 8 threads on an 8-core host while being limited to the workload equivalent of 4 cores, where 4 threads would perform better.

I still don't quite get the point. What is your general advice: use cpulimit, cores, or both combined?
Does cores just visually hide the other cores (e.g. in htop) without actually restricting the container to them, or does it do CPU scheduling similar to cpulimit?
I prefer having the other cores hidden, so I would like to limit containers with cores. But are there any disadvantages to using only cores, without cpulimit?

(off-topic) And is there a way to show container-specific load averages, rather than displaying the host's load in every LXC container? I know this has been asked a couple of times in this forum, but I am not sure if there has been any progress lately (LXC on Proxmox VE 5.0).

Thanks.
 
cores pins the container to the number of cores you specify, while cpulimit limits the container's processes to the relative CPU time you specify.

An example:

A container with cores: 4
can use 4 cores of the host (the ones it is pinned to).

If you now also set cpulimit, e.g. to 3,
the container is limited to the CPU time of 3 cores, but that time can be spent across the 4 cores it is pinned to.

is this helpful?
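The cpulimit side of the example above ends up as a CFS quota in the container's cgroup. A small sketch of how a process could read its own effective limit, trying the cgroup v2 interface first and then the legacy v1 files (paths are the standard kernel locations, but whether a quota is set depends on the environment):

```python
def cfs_cpulimit():
    """Return the effective cpulimit (CPU-seconds per wall-clock
    second, e.g. 3.0 for "cpulimit: 3"), or None if no quota is
    set or the cgroup files are not readable here."""
    # cgroup v2: a single "cpu.max" file holds "<quota> <period>".
    try:
        quota_s, period_s = open("/sys/fs/cgroup/cpu.max").read().split()
        if quota_s == "max":
            return None  # unlimited
        return int(quota_s) / int(period_s)
    except OSError:
        pass
    # cgroup v1: quota and period live in separate files;
    # a quota of -1 means unlimited.
    try:
        quota = int(open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us").read())
        period = int(open("/sys/fs/cgroup/cpu/cpu.cfs_period_us").read())
        return None if quota < 0 else quota / period
    except OSError:
        return None

print(cfs_cpulimit())
```

Run inside a container with `cpulimit: 3`, this would report 3.0; on an unlimited host it returns None.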
 
(off-topic) And is there a way to show container-specific load averages, rather than displaying the host's load in every LXC container? I know this has been asked a couple of times in this forum, but I am not sure if there has been any progress lately (LXC on Proxmox VE 5.0).

No, /proc/loadavg is not "virtualized" by lxcfs (yet). See https://github.com/lxc/lxcfs/issues/13 for the upstream discussion and the reasons why this is not trivial to implement.
 
Setting cores makes sense if you want to hide host details from the guest.

Else I would simply use cpulimit.

So just quoting dietmar here: he suggests simply using cpulimit unless your application cares about the number of cores.

Given this, what's the reason for making cores the default CPU option when creating containers, instead of cpulimit like it used to be?

Personally I still use cpulimit, as for my particular use case my server is a low-powered home server (so power costs and cooling have to be considered). If I set Cores to, say, 1, and a container does some CPU-heavy work, the host (obviously) maxes out the one core the container is pinned to. This pushes the CPU into a higher power state, which costs more electricity and generates more heat. If I use cpulimit instead, the workload is spread across all 4 cores at about 25% per core, so my server doesn't enter a higher power state and stays cooler :)

Multiply this by a number of containers and it means my server only ever clocks up when it needs to, rather than when any one container chomps CPU.

This is just my preference, obviously. I'm aware that cpulimit has a small scheduling overhead compared to cores, but hey ho.
 
This will be solved eventually. We moved back to SolusVM because OpenVZ gets the load average right. Hopefully it will be implemented soon, as per the GitHub link; when it is, we will probably move back to Proxmox with LXC for our cPanel servers. I just love Proxmox, so: patience :)
 