I'm in the process of changing journald config to solve a problem of journald eating through SWAP.
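For reference, this is roughly the change I'm experimenting with in /etc/systemd/journald.conf inside the container (the 64M cap is just a placeholder value I picked, not something I've confirmed is sensible):

Code:
# /etc/systemd/journald.conf (excerpt) -- 64M is a placeholder I chose
[Journal]
# Cap the volatile journal kept in /run/log/journal (tmpfs)
RuntimeMaxUse=64M
# Alternative I'm considering: write to disk under /var/log/journal instead
#Storage=persistent

followed by a systemctl restart systemd-journald so it takes effect.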
I know that on some of my containers, journald stores logs in memory. After I cleaned logs on one of the containers, SWAP usage went down from 100% to basically 0%.
Now I have another container and this is what's reported from inside the container:
Code:
root@container:~# journalctl --disk-usage
Archived and active journals take up 2.4G on disk.
root@container:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.0G        221M         44M        714M        758M        802M
Swap:          3.0G        1.8G        1.2G
root@container:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop2      7.9G  3.3G  4.2G  44% /
none            492K     0  492K   0% /dev
tmpfs            32G  4.0K   32G   1% /dev/shm
tmpfs            32G  2.5G   29G   8% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
none             32G     0   32G   0% /run/shm
That's a single 2.4 GB log file stored in /run/log/journal, but that's a lot more than what free -h reports in terms of memory and swap usage.
Where are the logs actually stored then?
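From the df output, /run looks like a tmpfs, so my guess is the journal files are sitting there. These are the commands I've been using to try to confirm that (hopefully they're the right ones, and assuming they behave the same inside an LXC container):

Code:
# Show which filesystem actually holds the journal directory
root@container:~# findmnt -T /run/log/journal
# Measure the journal directory itself rather than all of /run
root@container:~# du -sh /run/log/journal

But even if that confirms tmpfs, I don't know whether that 2.4 GB is counted against the container's RAM or its SWAP.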
I also don't know how to check whether those journald logs are currently hogging memory or SWAP that could otherwise be used by other processes in the container.
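The closest I've got is comparing the tmpfs usage with the shared-memory counters, on the assumption that tmpfs pages show up as Shmem (the "shared" column in free) and can be pushed to swap, though I'm not sure how lxcfs affects these numbers inside the container:

Code:
# tmpfs usage of /run vs. shared-memory accounting
root@container:~# df -h /run
root@container:~# grep -i shmem /proc/meminfo
# journald's own view of its usage
root@container:~# journalctl --disk-usage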
Any help would be appreciated, along with some pointers on what to google, because I have very little idea when it comes to this.
Code:
foo@host:~$ pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-1-pve: 4.10.17-18
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: not correctly installed
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-50
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-41
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-25
pve-cluster: 5.0-36
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-19
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-3
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-50
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2