In the bugzilla issue they didn't fix anything; they just closed it and provided a workaround, which in my view is not optimal. I'm not sure where the change should go, but there should be an /etc/default/lxcfs with an LXCFS_OPTS= variable to be able to specify whether you want this enabled or not.
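Something along these lines is what I have in mind. Note that neither this file nor the LXCFS_OPTS variable exists in current packaging (that's the whole request); the flag shown is a real lxcfs option, used here purely as an illustration:

```shell
# Hypothetical /etc/default/lxcfs -- this file and LXCFS_OPTS do not exist
# today; the flag below is just an example of a real lxcfs option.
LXCFS_OPTS="--enable-loadavg"
```

The lxcfs systemd unit would then need an `EnvironmentFile=-/etc/default/lxcfs` line and `$LXCFS_OPTS` appended to its ExecStart for this to take effect.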
We eventually found what was causing the leak, and it was not in Proxmox. It was a bug in a library (nss-softokn) in the CentOS 7 CTs, which is fixed by upgrading that library to a newer version; see the relevant commit:
Bug 1603801 [patch] Avoid dcache pollution from sdb_measureAccess()...
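For anyone hitting the same thing, a dry-run sketch of what we did inside the affected CentOS 7 CT. The run() helper only prints each command; drop it to actually apply. The exact fixed version depends on the distro backport, so treat "update to the latest" as the fix rather than a specific version number:

```shell
# Dry run: run() only echoes the commands instead of executing them.
run() { echo "+ $*"; }
run rpm -q nss-softokn nss-softokn-freebl        # check the current version
run yum update -y nss-softokn nss-softokn-freebl # pull in the fixed build
# long-running processes using NSS (or the CT itself) need a restart
# to pick up the new library
```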
The journal size is pretty small tbh:
root@pmxc-12:~# journalctl --disk-usage
Archived and active journals take up 24.0M in the file system.
By the way, the OOM killer is never triggered, because the CT only gets close to its memory limit, around 97% or so, without exceeding it.
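This is roughly how we watch it. A sketch assuming cgroup v1 (as used by PVE 6.x on kernel 5.4); the CTID 112 is a placeholder, and the script falls back to sample figures if the cgroup files aren't readable so the arithmetic is visible:

```shell
#!/bin/sh
# Report a container's memory usage as a percentage of its cgroup limit.
# Assumes cgroup v1 paths as on PVE 6.x; CTID 112 is a placeholder.
CTID=${1:-112}
BASE=/sys/fs/cgroup/memory/lxc/$CTID
if [ -r "$BASE/memory.usage_in_bytes" ]; then
    usage=$(cat "$BASE/memory.usage_in_bytes")
    limit=$(cat "$BASE/memory.limit_in_bytes")
else
    # sample figures (~97% of a 4 GiB limit) so the math is demonstrable
    usage=4166120000
    limit=4294967296
fi
echo "$((100 * usage / limit))% of limit in use"
```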
More information: we are seeing the same issue on another CT, on a host we upgraded.
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
We are still puzzled by this behavior and we haven't found the cause yet.
There have been no recent changes to that server that would explain that kind of memory usage increase.
Other things we have observed:
- Running kernel 5.4.106-1-pve and reverting packages lxc-pve and...
Eventually I removed the disk from the pool and then, following the remark from @avw, I could attach it as a mirror:
scan: scrub repaired 0B in 07:22:19 with 0 errors on Sun Apr 11 07:46:20 2021
remove: Removal of vdev 1 copied 415G in 1h0m, completed on Mon Apr 19...
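For the record, a dry-run sketch of the sequence, since `zpool detach` only applies to mirror and replacing vdevs and the disk had been added as a separate top-level vdev. The run() helper only prints each command (drop it to apply); the disk name is the one from this thread, while SURVIVOR is a placeholder for the disk that stays in rpool:

```shell
# Dry run: run() only echoes the commands instead of executing them.
run() { echo "+ $*"; }
NEWDISK=wwn-0x5000c500b00df01a-part3   # disk mistakenly added as its own vdev
SURVIVOR=existing-rpool-disk           # placeholder: the disk remaining in rpool
run zpool remove rpool "$NEWDISK"              # evacuates the data off that vdev
run zpool status rpool                         # wait until the removal completes
run zpool attach rpool "$SURVIVOR" "$NEWDISK"  # re-add the disk as a mirror leg
```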
The two affected CTs run zabbix-proxy and salt-master. The image used for both is centos-7.
The zabbix-proxy one:
We have detected a strange (or at least not well-understood) behavior in the memory usage of at least two containers, but we believe it's a generalized issue.
After a CT restart, the memory usage keeps steadily growing until, after a couple of days, it reaches around 96-99% of the total assigned...
Yes, I also tried that and it didn't work:
root@pve01:~# zpool detach rpool wwn-0x5000c500b00df01a-part3
cannot detach wwn-0x5000c500b00df01a-part3: only applicable to mirror and replacing vdevs
Proxmox version and zfs versions:
root@pve01:~# zfs version