[SOLVED] CTs' used memory keeps growing until full

tuxillo

Hi,

We have detected a strange (or not well understood) behavior in the memory usage of at least two containers, but we believe it is a generalized issue.

After a CT restart the memory usage keeps steadily growing until, after a couple days, it reaches around 96-99% of the total assigned memory.
What's strange is that, according to the usual memory metric tools, the sum of the resident memory of all processes comes nowhere near what is being reported.
buf/cache and shared do not account for much of it either. We have not observed the OOM killer take any action yet.
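
For reference, this is roughly the comparison we are doing inside the CT (a minimal sketch; summing RSS double-counts shared pages, so if anything it should overestimate):

Code:
# inside the CT: sum the RSS of all processes (ps reports RSS in KiB)
ps -eo rss= | awk '{ sum += $1 } END { printf "sum of RSS: %.1f MiB\n", sum / 1024 }'
# compare with what free reports as "used"
free -m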

Interestingly enough, when we drop all the caches by running the following on the host,

Code:
echo 3 > /proc/sys/vm/drop_caches

the memory in the CT is freed and becomes available again after a few seconds, as you can see in the image below:

[attachment: salt.jpg]
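
For the record, this is roughly what we run to compare the CT's view before and after dropping the caches (the container ID and the sleep are just examples):

Code:
# on the host
CTID=34101                          # example; use the affected container's ID
pct exec $CTID -- free -m           # CT view before dropping caches
echo 3 > /proc/sys/vm/drop_caches
sleep 5
pct exec $CTID -- free -m           # CT view a few seconds later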

Initially we were suspicious of the 6.2 -> 6.3 upgrade we performed a few days ago but, after booting one of the PVE nodes with the 6.2 kernel, we can still observe the same behavior, so we think this must have been happening at least from 6.2 onwards.

Could it be that the cache memory for a particular cgroup is not being accounted for in the cgroup's memory stats but is still linked to it?
If so, how would anyone get meaningful information from the memory reporting tools within a CT?
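
For context, this is where we have been looking on the host so far (a minimal sketch; it assumes the cgroup v1 layout our PVE 6.x nodes use, so the exact path may differ on other setups):

Code:
# on the host, for a given container ID (34101 is just an example)
CTID=34101
cat /sys/fs/cgroup/memory/lxc/$CTID/memory.usage_in_bytes
# page cache vs. anonymous memory as the kernel accounts it for this cgroup
grep -E '^(total_)?(cache|rss) ' /sys/fs/cgroup/memory/lxc/$CTID/memory.stat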

Thanks,
 
hi,

* what is running in the containers?

* pct config CTID for the affected containers

* output of pveversion -v

After a CT restart the memory usage keeps steadily growing until, after a couple days, it reaches around 96-99% of the total assigned memory.
what do you see in htop? is there any process taking up a lot of memory (in the container)?
 
hi,

* what is running in the containers?

zabbix-proxy and salt-master. The image used for both is centos-7.

* pct config CTID for the affected containers
The zabbix-proxy one:

Code:
arch: amd64
cores: 2
hostname: zabbix-proxy.mysite
memory: 12288
nameserver: 172.1.11.254
net0: name=eth0,bridge=vmbr0,gw=172.1.11.254,hwaddr=xx:xx:xx:xx:xx,ip=172.1.11.100/24,tag=11,type=veth
onboot: 1
ostype: centos
rootfs: data:subvol-34101-disk-0,size=20G
swap: 0
lxc.apparmor.profile: lxc-container-default-with-nfs

The salt-master one:
Code:
arch: amd64
cores: 6
hostname: salt.mysite
memory: 16384
nameserver: 172.1.11.254
net0: name=eth0,bridge=vmbr0,gw=172.1.11.254,hwaddr=xx:xx:xx:xx:xx,ip=172.1.11.225/24,tag=11,type=veth
onboot: 1
ostype: centos
rootfs: data:subvol-1603-disk-0,size=20G
swap: 0
* output of pveversion -v
On node1:

Code:
root@node-01:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 8.0-2~bpo10+1
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.13-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

On node2:

Code:
root@node-02:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 8.0-2~bpo10+1
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.13-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

Take into account that, as I mentioned earlier, we booted one node (node01) with the PVE 6.2 kernel to test.
what do you see in htop? is there any process taking up a lot of memory (in the container)?
Yes, I checked, and nothing obvious shows up, as I mentioned. The sum of the memory used by all processes does not account for the reported memory usage.

Cheers,
 
We are still puzzled by this issue and haven't found the cause yet.

More information:

[attachment: px_ct_salt_BAD.png]

There have been no recent changes to that server that would explain that kind of memory usage increase.

Other things we have observed:

- Running kernel 5.4.106-1-pve and reverting packages lxc-pve and lxcfs to pre-6.3 versions didn't make any difference.
- In /proc/meminfo on the host we see 'KReclaimable:'; if we do an 'echo 3 > /proc/sys/vm/drop_caches' we observe an increase in the CT's available memory, but that released memory was in no way accounted as cache (as reported by free) inside the CT.
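
The commands we use for that check, for reference (the container ID is just an example):

Code:
# on the host: reclaimable kernel memory and the dentry slab
grep -E '^(KReclaimable|SReclaimable)' /proc/meminfo
grep '^dentry ' /proc/slabinfo
# the same figure as seen from inside the CT (run from the host via pct exec)
CTID=34101                          # example; use the affected container's ID
pct exec $CTID -- grep MemAvailable /proc/meminfo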

Any hint would be appreciated.

Thanks.
 
More information: the same issue on another CT whose host we upgraded.

[attachment: pmxc-12_upgraded.png]

Detailed pveversion:

Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-4.15: 5.4-9
pve-kernel-4.13: 5.2-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.13.16-4-pve: 4.13.16-51
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 8.0-2~bpo10+1
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.13-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
hi,

thank you for the information and outputs. will try to reproduce this here and get back
 
Thanks @oguz, if you need any more information let me know. I'll keep investigating on my side but at this point I'm out of ideas.
 
Didn't read the whole thread, but if the LXC keeps consuming memory until the OOM killer runs, in my experience that was due to systemd logging to RAM.
Do free -h before and after you run:

Code:
journalctl --vacuum-size=15M

and report back.
 
The journal size is pretty small tbh:

Code:
root@pmxc-12:~# journalctl --disk-usage
Archived and active journals take up 24.0M in the file system.

By the way, the OOM killer is never triggered, because the CT only gets up to about 97% of its memory limit.
 
Hi all,

We eventually found what was causing the leak, and it was not in Proxmox. It was a bug in a library (nss-softokn) in the CentOS 7 CTs, which is fixed by upgrading that library to a newer version; see the relevant commit:

Code:
Bug 1603801 [patch] Avoid dcache pollution from sdb_measureAccess() r=mt

As implemented, when sdb_measureAccess() runs it creates up to 10,000 negative
dcache entries (cached nonexistent filenames).

There is no advantage to leaving these particular filenames in the cache; they
will never be searched again. Subsequent runs will run a new test with an
intentionally different set of filenames. This can have detrimental effects on
some systems; a massive negative dcache can lead to memory or performance
problems.

Since not all platforms have a problem with negative dcache entries, this patch
is limited to those platforms that request it at compile time (Linux is
currently the only platform that does.)

Differential Revision: https://phabricator.services.mozilla.com/D59652
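
For anyone hitting the same thing: on our CentOS 7 CTs the fix was simply updating the package to a build that carries this patch (sketch; the exact fixed version depends on what your repositories ship):

Code:
# inside the CentOS 7 CT
yum update nss-softokn
rpm -q nss-softokn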

There are two parts in this bug report:

1. The nss-softokn library (version 3.28.x) has a bug that causes 'dentry' slab cache pollution because it creates and deletes a lot of files. We identified it by correlating the increase of 'dentry' objs in slabtop(1) with the decrease in MemAvailable and the increase of KReclaimable within the CT. This CentOS post made us look at the library specifically: https://forum.centos-webpanel.com/index.php?topic=3901.0

2. The dcache pollution mentioned above was being reported as memory used by the LXC container, but not as cache (is the dentry cache even reported as cache at all?), so it was impossible to determine where the memory usage was coming from at first glance. I consider this a serious issue, since any dcache pollution within a container can mess up the memory statistics and confuse reporting tools (e.g. monitoring).
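
A rough way to reproduce the effect (purely illustrative; the path and loop count are arbitrary) is to look up a large number of nonexistent files from inside a CT and watch the dentry slab grow on the host:

Code:
# inside the CT: create negative dentries by stat'ing files that don't exist
for i in $(seq 1 100000); do stat "/tmp/no-such-file-$i" >/dev/null 2>&1; done

# on the host: the dentry object count grows, and (as described above) it shows
# up as used memory for the container rather than as cache
grep '^dentry ' /proc/slabinfo
grep KReclaimable /proc/meminfo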

@oguz, point 1 is solved for us, but point 2 is still an issue, probably a Linux kernel one.

I will mark this as "SOLVED" now. Thanks all.
 