CPU usage inside LXC container

denissi

Hi, after an update and reboot we see abnormal CPU usage inside an LXC container.

If you look at CPU load with top on the host and inside the container, the numbers are very different.

Has anyone run into the same problem?

Thank you in advance!
 

Attachments: cpu.jpg (195.1 KB)
After downgrading lxcfs 3.0.3-pve1 => lxcfs 3.0.2-2 and rebooting the server, everything returned to normal.
 
This flag was not set by default, and the problem was already there before I touched it. I tried enabling the flag as described here:

This is now resolved. Whether to enable it default or not, will be decided later.

Willing users can enable this behaviour by editing:

/lib/systemd/system/lxcfs.service

and adding the `-l` flag to ExecStart. Then restart the service and the containers.
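Editing the packaged unit file works, but a package upgrade can overwrite it. A systemd drop-in survives upgrades; a minimal sketch, placed in `/etc/systemd/system/lxcfs.service.d/loadavg.conf` (the lxcfs binary path and mount point below are the Debian defaults and may differ on your system):

```
[Service]
ExecStart=
ExecStart=/usr/bin/lxcfs -l /var/lib/lxcfs
```

Apply it with `systemctl daemon-reload && systemctl restart lxcfs`, then restart the containers.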

But this did not solve the problem of the CPU load shown inside the container.
 
But this did not solve the problem of the CPU load shown inside the container.

I'm not sure I understood the problem exactly.

Are you seeing different CPU loads on host and container? Or is the container using too much CPU? What's the specific issue here? Also how did downgrading lxcfs solve it?
 
Are you seeing different CPU loads on host and container? Yes.

After the update, top inside the LXC container shows an abnormal load on some cores, but there is no corresponding load on the host on the cores assigned to the container via pct cpuset.

The images show that inside the container the first four cores are always at 100% load; on the host these are cores 4, 5, 16, 17, 19, 22, 27 and 31, which are assigned exclusively to this container.

After rolling back the version, the per-core load is almost identical to what the host shows.
 

Attachments

  • cpu2.jpg (559.1 KB)
I am also seeing this issue. When I monitor my LXC containers via SNMP, they show 100% CPU usage, but when I log into the container, the CPU is not at 100% and is what I expect. Rebooting the container sometimes fixes it.
 
I am also seeing this issue

* what is your `pveversion -v`?
* is the loadavg flag enabled for lxcfs? (-l)
* does the problem go away if you downgrade lxcfs?
* do you notice anything weird in the system logs while this happens?
 
proxmox-ve: 5.4-2 (running kernel: 4.15.18-18-pve)
pve-manager: 5.4-10 (running version: 5.4-10/9603c337)
pve-kernel-4.15: 5.4-6
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-11
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-53
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-37
pve-container: 2.0-39
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

I do not have the -l flag enabled.
I have not attempted any downgrade of lxcfs.
I looked through the syslog but am not seeing any errors. I have not spent a lot of time there though, so I don't have a good sense of what's normal.
 
hi,

unfortunately, viewing resources from inside lxc containers will always be a problematic topic since they're in principle taken from the host.

the output you see depends on the tool you use and the distribution you're on (the sources these tools use to get the memory/cpu/etc. information differ; one might use syscalls and another might read /proc/meminfo, for example)

this commit[0] will apparently have an effect on this issue though, so you can follow it.

[0]: https://github.com/lxc/lxcfs/pull/290
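As a quick illustration of those differing sources: on the host, files like /proc/stat and /proc/loadavg come straight from the kernel, while inside a container they are FUSE files served by lxcfs from the container's cgroup. Running the same commands in both places shows which view a given tool is reading (a sketch; the behaviour inside a CT assumes a working lxcfs mount):

```shell
# Number of CPUs this environment reports: all host cores on the host,
# only the assigned cores inside a CT with a working lxcfs.
grep -c '^cpu[0-9]' /proc/stat

# Load average this environment reports: host-wide on the host,
# per-container only if lxcfs runs with the -l flag.
cat /proc/loadavg
```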
 
Dear all,

I am facing a similar issue.

If I run htop inside the LXC container, I see the CPU usage of the host itself, but the processes inside the LXC are not consuming that amount of resources.

It is not clear to me: do I have to update PVE to solve this issue?

(screenshot attached: 1688755705584.png)

root@sda292:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
root@sda292:~#
 
I also see high CPU usage in an LXC container when using top/htop. Is it expected to work better (show only the LXC's CPU usage, not the host's)? Thanks.
 
I've got the same problem running the latest Proxmox 8. The CPU usage shown in my Ubuntu 22.04 LXC container is similar to that on the Proxmox host, even with nothing running in the container. The CPU usage graphed in the Proxmox web interface seems to be the correct one.
 
Well, how can it show the "proper" data if you can't even easily decide what you should see?

Virtualization is all about lying.

It started with lying about memory: programs were given RAM that wasn't really there, and when they actually wanted it, they got preempted and memory got paged in to make it appear as if it had always been there.

The more virtualization, the more difficult the lies.

E.g. should the inside of a container know what resources are actually consumed?

In the case of an IaaS-style container, which has a VM outlook and believes it's alone on the machine, you'd say no: it should only see what it consumes by itself.

In the case of a PaaS or SaaS container, which has a more cooperative and scale-out outlook, you'd be more inclined to say yes, because it tries to be more social and adapt its behavior to the resources allocated.

Now what do you do if you run a PaaS container nested inside an IaaS container, which is how Google and other hyperscalers operate by default?

Control groups and namespace capabilities were invented to enable this, but they need to be understood and used, and still cannot solve all issues.

And one of the things I liked best about OpenVZ is that it allows you to cheat better, placing the sticks and the carrots for each container depending on what you know about its "character". If it was the greedy type that tends to grab everything in its reach, you'd signal lower consumables than were available, yet sometimes let it take more, only generating alerts rather than killing it just yet, because you had learned the size of its usual overreach. If it was the hesitant type, you'd actually signal higher resources so it would start using them, e.g. to get better transaction performance out of a self-tuning database engine.

It doesn't get easier when you try to measure and tune power consumption, and potentially adjust your choice between P-cores and E-cores, or control processor clocks from the app, to optimize energy consumption vs. SLAs.
 
Thank you for your comment; I understand the problem as you described it. However, I think it's not that hard to sum the CPU usage of the processes running in the LXC container and scale the result by the number of assigned CPU cores.
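As a sketch of that idea: take two samples of the container's cumulative CPU time (for example from the container's `cpuacct.usage` file under `/sys/fs/cgroup` on a cgroup-v1 host; that exact path is an assumption) and scale the delta by the sampling window and the number of assigned cores:

```shell
# cpu_pct START_NS END_NS ELAPSED_S NCORES
# Converts two cumulative CPU-time samples (in nanoseconds) into a
# utilisation percentage averaged across NCORES cores.
cpu_pct() {
    awk -v a="$1" -v b="$2" -v t="$3" -v n="$4" \
        'BEGIN { printf "%.1f", 100 * (b - a) / 1e9 / (t * n) }'
}

# Synthetic example: 2s of CPU time consumed during a 1s window on 4 cores
cpu_pct 0 2000000000 1 4   # -> 50.0
```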
 
