In the bugzilla issue they didn't fix anything, just closed it and provided a workaround, which in my view is not optimal. Not sure where the change should go, but there should be an /etc/default/lxcfs file with an LXCFS_OPTS= variable so you can specify whether you want this enabled or not.
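Something like the following is what I have in mind; a minimal sketch of the proposed file, assuming the lxcfs service unit would be changed to source it and pass $LXCFS_OPTS to the daemon (the file and variable name are only my suggestion above, they don't exist today):
# /etc/default/lxcfs (hypothetical, per the suggestion above)
# Extra options passed to the lxcfs daemon by its service unit.
# Leave empty to keep the current default behaviour.
LXCFS_OPTS=""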
I understand...
Is there a solution for this?
If you're using ZFS and with aufs gone, your only choice seems to be fuse-overlayfs, but then you can't back up the LXC or use ZFS replication, and can you even migrate LXCs?
Hi all,
We eventually found what was causing the leak and it was not in Proxmox. It was a bug in a library (nss-softokn) in the CentOS 7 CTs, which is fixed by upgrading that library to a newer version; see the relevant commit:
Bug 1603801 [patch] Avoid dcache pollution from sdb_measureAccess()...
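For reference, this is roughly how we checked and pulled in the fixed library inside the CentOS 7 CTs; a sketch only, since the exact fixed package version depends on the distro errata, so verify it against the bug report:
# inside the CentOS 7 container: check the installed version, then update
rpm -q nss-softokn nss-softokn-freebl
yum update -y nss-softokn nss-softokn-freebl
# restart the affected services (or the whole CT) afterwards so they pick up the new library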
The journal size is pretty small tbh:
root@pmxc-12:~# journalctl --disk-usage
Archived and active journals take up 24.0M in the file system.
By the way, the OOM killer is never triggered because the CT only gets almost up to its memory limit, 97% or so.
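For what it's worth, this is how we compare the CT's usage against its limit from the host; a sketch assuming PVE 6.x with cgroup v1, where 101 stands in for the container's VMID:
# on the PVE host, cgroup v1 memory controller for CT 101 (adjust the VMID)
cat /sys/fs/cgroup/memory/lxc/101/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/101/memory.usage_in_bytes
# breakdown of what the kernel accounts against the CT (cache vs rss etc.)
cat /sys/fs/cgroup/memory/lxc/101/memory.stat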
More information: we are seeing issues on another CT whose host we upgraded.
Detailed pveversion:
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.106-1-pve...
We are still puzzled by this issue and haven't found the cause yet.
More information:
There have been no recent changes to that server that would explain that kind of memory usage increase.
Other things we have observed:
- Running kernel 5.4.106-1-pve and reverting packages lxc-pve and...
Eventually I removed the disk from the pool and then, following the remark from @avw, I could attach it as a mirror:
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 07:22:19 with 0 errors on Sun Apr 11 07:46:20 2021
remove: Removal of vdev 1 copied 415G in 1h0m, completed on Mon Apr 19...
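In case it helps anyone else, the rough command sequence was along these lines (a sketch using the same device name as in my failed detach attempt; double-check your own device paths before running anything):
# evacuate the single-disk vdev out of the pool (ZFS 0.8+ device removal)
zpool remove rpool wwn-0x5000c500b00df01a-part3
# watch the evacuation progress
zpool status rpool
# once removal has completed, re-add the disk as a mirror of the remaining vdev
# (<existing-device> is a placeholder for the disk you want to mirror against)
zpool attach rpool <existing-device> wwn-0x5000c500b00df01a-part3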
zabbix-proxy and salt-master. The image used for both is centos-7.
The zabbix-proxy one:
arch: amd64
cores: 2
hostname: zabbix-proxy.mysite
memory: 12288
nameserver: 172.1.11.254
net0: name=eth0,bridge=vmbr0,gw=172.1.11.254,hwaddr=xx:xx:xx:xx:xx,ip=172.1.11.100/24,tag=11,type=veth
onboot: 1...
Hi,
We have detected a strange (or not well-understood) behavior in the memory usage of, at least, two containers but we believe it's a generalized issue.
After a CT restart the memory usage keeps steadily growing until, after a couple days, it reaches around 96-99% of the total assigned...
Yes, I also tried that and it didn't work:
root@pve01:~# zpool detach rpool wwn-0x5000c500b00df01a-part3
cannot detach wwn-0x5000c500b00df01a-part3: only applicable to mirror and replacing vdevs
Proxmox version and zfs versions:
root@pve01:~# zfs version
zfs-0.8.5-pve1
zfs-kmod-0.8.5-pve1...