Hello,
I was wondering whether the memory reported in the web UI for LXC containers is wrong.
Here's the output of 'free -m' from a container with 2 GB of RAM and no swap:
Code:
              total        used        free      shared  buff/cache   available
Mem:           2048         978           9        2757        1060           9
Swap:             0           0           0
You can see that the shared value is 2757 MB, which is more than the total memory available to this container. I'm not sure whether this is normal.
This also causes issues in htop.
The value seems to come from the Shmem field in /proc/meminfo.
As a result, monitoring software that draws graphs, such as LibreNMS, reports RAM usage at a constant 100%, while the web UI reports less than 5%.
Any ideas?
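
For reference, here is a quick way to check where the value comes from, by comparing Shmem inside the container with the container's memory limit and with the host. This is only a sketch; it assumes cgroup v1 (as used by PVE 5.1), so the exact cgroup path may differ on other setups.

Code:
# inside the container: Shmem as presented through lxcfs
grep -E 'MemTotal|Shmem' /proc/meminfo
# the container's actual memory limit (cgroup v1 path; adjust if needed)
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# on the host, for comparison: if Shmem here matches the container's value,
# the number is likely passed through from the host rather than accounted per container
grep Shmem /proc/meminfo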
Code:
proxmox-ve: 5.1-30 (running kernel: 4.13.8-2-pve)
pve-manager: 5.1-38 (running version: 5.1-38/1e9bc777)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.8-3-pve: 4.13.8-30
libpve-http-server-perl: 2.0-7
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-22
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-3
pve-container: 2.0-17
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
libpve-apiclient-perl: 2.0-2