LXCFS broken / cgroup limit

Hi there,

we're currently running a four-node cluster with about 250 LXC containers on each node (evenly distributed). Primary storage for almost all containers (except 4) is on the integrated Ceph within Proxmox.

Kernel version:
Linux 5.3.13-1-pve #1 SMP PVE 5.3.13-1 (Thu, 05 Dec 2019 07:18:14 +0100)

PVE Manager version:
pve-manager/6.1-3/37248ce6


We've had three outages within the last week, all due to lxcfs fubar'ing up:

Code:
Apr 27 03:10:16 lxc-prox1 kernel: [741590.180559] cgroup: fork rejected by pids controller in /system.slice/lxcfs.service
Apr 27 03:10:16 lxc-prox1 lxcfs[1771]: fuse: error creating thread: Resource temporarily unavailable
Apr 27 03:10:18 lxc-prox1 lxcfs[1771]: bindings.c: 2473: recv_creds: Timed out waiting for scm_cred: No such file or directory
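To see whether the lxcfs service really is running into its pids limit, its pids accounting can be checked directly. This is only a sketch, assuming the cgroup v1 layout referenced in the reply further down; paths may differ on other setups:

Code:
# Tasks currently in the lxcfs service cgroup vs. the configured limit (cgroup v1):
cat /sys/fs/cgroup/pids/system.slice/lxcfs.service/pids.current
cat /sys/fs/cgroup/pids/system.slice/lxcfs.service/pids.max

# The TasksMax value systemd has applied to the unit:
systemctl show lxcfs -p TasksMax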

Restarting lxcfs so the running containers (by then zombies without a working /proc) could be shut down properly, and then rebooting the cluster node, solved the problem, but that has its pain points...

Are we hitting any limit here?
Googling around brought https://www.suse.com/support/kb/doc/?id=000019044 to my attention, which suggests setting a higher (or unlimited) TasksMax for the service:

Code:
[Service]
TasksMax=MAX_TASKS|infinity
 
Can you please include the full pveversion -v output? Anything special about the containers? Could you post a sample config? Thanks!
 
Hi fabian,

I ran an update last night and forgot to capture the output before the update:

Code:
root@lxc-prox4:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
The containers are running plain CentOS; sample container config:
Code:
root@lxc-prox4:/etc/pve/nodes/lxc-prox4/lxc# cat 313.conf
arch: amd64
cmode: shell
console: 0
cores: 2
hookscript: local:snippets/lxc-route-hookscript.sh
hostname: as2-ce
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=0,hwaddr=52:59:F3:EE:60:1A,type=veth
onboot: 1
ostype: centos
rootfs: vservers:vm-313-disk-0,mountoptions=noatime,size=17G
swap: 1024
tty: 0
unprivileged: 1
 
Yes, you can try increasing TasksMax, either with an override file (e.g., by calling systemctl edit lxcfs) or globally for all services via systemd-system.conf. If the problem happens again, you can check whether this limit was hit before rebooting by looking at /sys/fs/cgroup/pids/system.slice/lxcfs.service/pids.events - it will tell you how many times a fork or clone syscall failed because it would have exceeded the configured limit.
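A minimal sketch of what that could look like - the exact commands below are standard systemd usage and not specific to this thread, and a fixed number can of course be used instead of infinity:

Code:
# Create a drop-in for the lxcfs unit (opens an editor) and add the two lines below:
systemctl edit lxcfs

    [Service]
    TasksMax=infinity

# Alternatively, set it directly on the running unit (this also writes a persistent drop-in):
systemctl set-property lxcfs.service TasksMax=infinity

# Check the effective limit and, after the next incident, the failure counter:
systemctl show lxcfs -p TasksMax
cat /sys/fs/cgroup/pids/system.slice/lxcfs.service/pids.events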
 
