[Q] LXC configuration wizard: what is "CPU limit" & "CPU unit"?

cmonty14

Hello!

What is the relation of the parameters "CPU limit" & "CPU unit" requested in the configuration wizard (see screenshot) when creating LXC container to CPU sockets / CPU cores?

THX

[Screenshot: Auswahl_071.png]
 
There is no direct relation, because sockets/cores are just attributes describing the processor architecture,
and we simply use the host architecture for containers. CPU limit/units are CFS scheduler settings, which are now available
for both containers and KVM VMs.

CPU units is a CFS scheduler feature described here (cpu.shares):

https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt

CPU limit refers to 'cfs_period_us' feature described here:

https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt

So CPU limit is basically the maximum number of cores a container can utilize.
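To make the limit concrete: under the CFS bandwidth controller, a limit of N cores corresponds to a runtime quota of N periods' worth of CPU time per period (with the default period of 100000 µs). This is a sketch of that arithmetic, not Proxmox code:

```python
# Sketch: how a "CPU limit" of N cores maps onto the CFS bandwidth
# controller's cfs_quota_us / cfs_period_us pair.
def cfs_quota_us(cpu_limit, cfs_period_us=100000):
    """Return the CFS quota (in microseconds per period) for a core limit."""
    return int(cpu_limit * cfs_period_us)

print(cfs_quota_us(2))    # a limit of 2 cores -> 200000 us per 100000 us period
print(cfs_quota_us(0.5))  # fractional limits (half a core) are also possible
```

Note that the limit caps total CPU time, not which cores are used; the scheduler is free to spread that time across all host cores.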
 
Hi, thanks for the quick reply.

Are there any best practices for these settings?
Can you document these parameters in Wiki?
And how can I check how many CPU units are available on my host?
The command "vzcpucheck" is not working on PVE 4.

THX
 
Hello.

There isn't a fixed number of units (= shares); think of them as priorities:
For example, if you have three containers (A, B and C) and give CT A 500 shares, CT B 250 shares and CT C 100 shares, then A can get 5 times the CPU of container C and 2 times the CPU of container B.

There aren't any best practices, as they're use-case specific. It's best to set them to your needs. If you want all containers to get equal CPU time, set them all to the same value (i.e. don't change anything, as this is the default). If a CT runs an important service, maybe give it a bit more.
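The three-container example above can be sketched as simple ratios; under full contention each container's slice is its shares divided by the total:

```python
# Sketch: CPU shares are relative weights, not absolute amounts.
# Under full contention, each container gets shares_i / sum(shares)
# of the available CPU time.
shares = {"A": 500, "B": 250, "C": 100}
total = sum(shares.values())

for ct, s in shares.items():
    print(f"CT {ct}: {s / total:.0%} of CPU time under full contention")

print(shares["A"] / shares["C"])  # A can get 5.0x the CPU of C
print(shares["A"] / shares["B"])  # A can get 2.0x the CPU of B
```

When there is no contention, shares have no effect; an idle host lets any container use as much CPU as its limit allows.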

And we're also really happy when a user contributes to the PVE Wiki, that would be great! Although we naturally try to expand it ourselves, too.

EDIT:
btw. "vzcpucheck" doesn't work anymore because it was an OpenVZ tool, and so it is not part of PVE 4, which switched from OpenVZ to LXC.
 
So CPU limit is basically the maximum number of cores a container can utilize.

But this is not the number of CPUs presented to the CT. I set CPU limit = 2 on a container, and htop displays all the CPUs of the host, all of them working every now and then. cat /proc/cpuinfo shows all CPUs, not the number assigned by the CPU limit. That can be misleading about the actual power available to the CT.
 
That can be misleading about the actual power available to the CT.

You can simply use KVM if you want full virtualization. IMHO CPU limit works great, and there is no real reason to limit the number of visible CPUs for containers.
 
You can simply use KVM if you want full virtualization. IMHO CPU limit works great, and there is no real reason to limit the number of visible CPUs for containers.

I disagree, Dietmar. When containers are sold as VPSes, the customers should have a clear understanding of their CPU resources. If they see 8 cores after buying a single-core VPS, they will have no incentive to upgrade, regardless of the fact that the CPU limit caps their actual share of processor resources.

So if possible, I would like to request that LXC containers work the same way as OpenVZ containers: show only the number of processors that are available to a container.
 
Dietmar,

AFAIK when LXD is stable, it'll work this way, also with the possibility to limit IOPS (when it's finished). Are there any plans to integrate LXD, and with it LXC 2.0?
 
AFAIK when LXD is stable, it'll work this way, also with the possibility to limit IOPS (when it's finished). Are there any plans to integrate LXD, and with it LXC 2.0?

LXD uses LXC, so if LXD can do it, we can also do it. But so far I have not seen any patches.

Are there any plans to integrate LXD and with it LXC 2.0 ?

This question makes no sense to me.
 
We also use the same lxcfs.

In that case, could you enable the bind mounts of the replacement /proc files to show the actual CPU number (CPU limit) and the swap size available to a container?

LXCFS is a simple userspace filesystem designed to work around some current limitations of the Linux kernel.

Specifically, it provides two main things:

  • A cgroupfs-like tree which is container aware and works using CGManager.
  • A set of files which can be bind-mounted over their /proc originals to provide CGroup-aware values.
 
@gkovacs You can assign single cores to lxc containers with "lxc.cgroup.cpuset.cpus = 2,3" for cores 3 and 4 in /etc/pve/lxc/<ID>.conf and it will correctly display only 2 cpus in /proc/cpuinfo.

I recently stumbled across this issue as facter for puppet reported always all cores of the host system.

Are there any other options to limit the displayed number of CPUs in /proc/cpuinfo? lxcfs seems to only use cpusets.
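As a sketch of the cpuset approach described above (CT ID 101 is a placeholder, and host core numbering starts at 0, so 2,3 means the third and fourth cores):

```shell
# Pin a container (hypothetical CT ID 101) to host cores 2 and 3 by
# adding the cpuset line to its PVE config:
echo 'lxc.cgroup.cpuset.cpus = 2,3' >> /etc/pve/lxc/101.conf

# Restart the container so the setting takes effect:
pct stop 101 && pct start 101

# Inside the CT, /proc/cpuinfo (via lxcfs) should now list only the
# pinned cores:
pct exec 101 -- nproc
```

The trade-off, as noted in the replies below, is that static pinning bypasses fair scheduling across all host cores.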
 
@gkovacs You can assign single cores to lxc containers with "lxc.cgroup.cpuset.cpus = 2,3" for cores 3 and 4 in /etc/pve/lxc/<ID>.conf and it will correctly display only 2 cpus in /proc/cpuinfo.

I recently stumbled across this issue as facter for puppet reported always all cores of the host system.

Are there any other options to limit the displayed number of CPUs in /proc/cpuinfo? lxcfs seems to only use cpusets.

Thanks for this information. The problem is that assigning specific cores to containers would seriously impair fair scheduling on the host. As it's impossible to predict how much load a container will put on a specific core, it would be very hard to divide the available cores evenly between containers, most likely resulting in some cores being overloaded (while others stay idle).

OpenVZ did provide a translation between scheduled CPU time on the host and virtual cores in the guest, so if you had 2 cores assigned to a container, it would show precisely how much load they had when viewed in the guest. Hopefully someone will do that for lxcfs.
 
Any news on this?
I totally agree with gkovacs; it's essential for a lot of people/companies to show a fixed number of cores and not all the host cores.
A lot of Proxmox users are totally lost with the new changes in Proxmox 4. The main problems are:

- Core limiting in the container
- Container storage saved as .raw instead of a folder
- Impossibility to downsize the storage of a container on the fly
 
Hello,

Same problems:

- Core limiting in the container
- Container storage saved as .raw instead of a folder
- Impossibility to downsize the storage of a container on the fly

Is there a bug tracker?

Thank you
 