Hello,
I'm having an issue with RAM usage on our hypervisors. It seems some RAM is going somewhere I can't account for.
The hypervisor has 70GB of usable RAM in total. The current ARC usage is 9GB according to arc_summary. I've calculated the amount of RAM used by each LXC container on the host, and the total came out to 15GB. Here's the view of the hypervisor's RAM in the web UI:
So the total usage should be closer to 24GB (15 + 9), yet free reports 62GB used. Where is the other 38GB?
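In case the method matters, this is roughly how I summed the container side (a sketch; the cgroup v1 path is what this 5.3 kernel uses, adjust if your layout differs):
Code:
# Sum current memory usage of all running containers via their cgroups
# (cgroup v1 layout as on PVE 6.x; the path is an assumption, verify on your host)
for id in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
    cat /sys/fs/cgroup/memory/lxc/$id/memory.usage_in_bytes
done | awk '{s+=$1} END {printf "containers: %.1f GiB\n", s/2^30}'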
Code:
              total        used        free      shared  buff/cache   available
Mem:             70          62           5           0           3           7
Swap:             0           0           0
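One thing worth noting: the ARC is counted under "used" by free rather than under buff/cache, so I cross-check the live ARC size straight from the kstats:
Code:
# Live ARC size in bytes (the "size" row of the SPL kstats)
awk '/^size/ {printf "ARC: %.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats
That matches the 9GB arc_summary reports, so the ARC itself isn't the missing chunk.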
The ZFS ARC is already limited to a minimum of 10% and a maximum of 30% of the server's total memory.
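For reference, those limits are set through the usual module options; the byte values below are my approximate 10%/30% of 70GB, the exact figures on the host may differ slightly:
Code:
# /etc/modprobe.d/zfs.conf -- ARC floor and ceiling in bytes (example values)
options zfs zfs_arc_min=7516192768    # ~7GB  (~10% of 70GB)
options zfs zfs_arc_max=22548578304   # ~21GB (~30% of 70GB)
After editing this file, update-initramfs -u -k all and a reboot are needed so the module picks the values up.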
I've read around a bit, and some people mention that the kernel slab might be eating it all for ZFS, but even with the slub_nomerge kernel parameter the problem persists (I've put my slab check after the pool status below). Here is some more info:
Code:
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:37:05 with 0 errors on Sun Apr 12 01:01:06 2020
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            wwn-0x5000cca05935b164-part2  ONLINE       0     0     0
            wwn-0x5000cca05934a01c-part2  ONLINE       0     0     0
          mirror-1                        ONLINE       0     0     0
            wwn-0x5000cca05936d108-part2  ONLINE       0     0     0
            wwn-0x5000cca059346164-part2  ONLINE       0     0     0
        logs
          mirror-2                        ONLINE       0     0     0
            wwn-0x5002538050002f4a-part2  ONLINE       0     0     0
            wwn-0x5002538050002e22-part2  ONLINE       0     0     0
        cache
          wwn-0x5002538050002f4a-part3   ONLINE       0     0     0
          wwn-0x5002538050002e22-part3   ONLINE       0     0     0

errors: No known data errors
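And this is how I've been checking the slab angle to rule it in or out (plain procfs and slabtop, nothing Proxmox-specific):
Code:
# Total slab memory as the kernel accounts it
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo

# Largest slab caches by size; the ZFS ones show up as zio_buf_*, dnode_t, arc_buf_* etc.
slabtop -o -s c | head -n 20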
Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-5.3: 6.1-3
pve-kernel-helper: 6.1-3
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-4.13: 5.2-2
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.13.16-4-pve: 4.13.16-51
pve-kernel-4.13.16-3-pve: 4.13.16-50
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.4-1-pve: 4.13.4-26
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.14-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-11
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-19
pve-docs: 6.1-4
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-10
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-2
qemu-server: 6.1-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
Any ideas?