LXCFS high CPU usage with a high number of CTs

ashehata

New Member
Jul 12, 2024
Hi Everyone,

I am encountering an issue with my LXC containers: lxcfs is using around 500% CPU, and command execution inside the containers is very slow.
This is the lxcfs process as shown by ps:
root 2475805 630 0.0 757080 20768 ? Ssl Apr07 868518:31 /usr/bin/lxcfs /var/lib/lxcfs

I was digging inside one of my LXC containers and found that any command that accesses one of the directories/files mounted by lxcfs takes a long time, and this is what delays the command's execution.
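
For example, a rough way to see the delay is to time a read of an lxcfs-backed file against a normal file; CT 101 below is just a placeholder ID:

time pct exec 101 -- cat /proc/meminfo > /dev/null    # served by lxcfs inside the CT
time pct exec 101 -- cat /etc/hostname > /dev/null    # regular file, for comparison
time cat /var/lib/lxcfs/proc/meminfo > /dev/null      # the same FUSE mount read directly on the host

In my case it is the lxcfs-backed reads that are slow.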

My System Info

LXC version: 5.0.2
LXCFS version: 5.0.3
Number of Running Containers : 100
VMs : 0
kernel version : 6.5.11-6-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-6 (2023-11-29T08:32Z) x86_64 GNU/Linux
pveversion --verbose:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-6-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1



Hardware Info :
CPU :
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel
Model name: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
BIOS Model name: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz CPU @ 2.6GHz
BIOS CPU family: 179
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
Stepping: 1
CPU(s) scaling MHz: 35%
CPU max MHz: 3500.0000
CPU min MHz: 1200.0000
BogoMIPS: 5200.10


Memory : 128 GB

All containers are hosted behind a RAID controller (Broadcom / LSI MegaRAID SAS-3 3108 [Invader]); the disks are HDDs with a total size of 11 TB.





If any other info is needed, please let me know.

Thank You
 
> Number of Running Containers : 100
> VMs : 0

100 containers on 1 node? It sounds like you're overloading the server.

Post what kind of hardware you're running this on: CPU make/model/cores, RAM, HDD/SSD/NVMe setup, etc.
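
Also, output like the following would show whether the node is actually saturated:

uptime                                      # load average vs. the number of logical CPUs
iostat -x 1 5                               # per-disk utilisation and await times (sysstat package)
cat /proc/pressure/cpu /proc/pressure/io    # pressure-stall info, available on recent kernels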
 
> > Number of Running Containers : 100
> > VMs : 0
>
> 100 containers on 1 node? It sounds like you're overloading the server.
>
> Post what kind of hardware you're running this on: CPU make/model/cores, RAM, HDD/SSD/NVMe setup, etc.
I updated the main post, please check it.
 
