LXC memory monitoring with Zabbix

avekrivoy

May 19, 2017
Hi!
I'm using Zabbix to monitor my Proxmox containers and I've run into a problem.
The Zabbix item for total memory, vm.memory.size[total], returns the amount of memory on the Proxmox host instead of the memory inside the container, even though the MemTotal value in /proc/meminfo is correct.
Zabbix also uses MemAvailable as the actual amount of free RAM on a host, but inside the containers MemAvailable equals MemFree.

Here is a listing from the LXC node:
Code:
cat /proc/meminfo 
MemTotal:        3121152 kB
MemFree:          641612 kB
MemAvailable:     641612 kB
Buffers:               0 kB
Cached:          2437208 kB
SwapCached:            0 kB
Active:          1254132 kB
Inactive:        1225316 kB
Active(anon):       6080 kB
Inactive(anon):    36560 kB
Active(file):    1248052 kB
Inactive(file):  1188756 kB
Unevictable:           0 kB
Mlocked:            3520 kB
SwapTotal:        524288 kB
SwapFree:         471924 kB
Dirty:               252 kB
Writeback:             0 kB
AnonPages:       1212060 kB
Mapped:           137808 kB
Shmem:            161448 kB
Slab:               0 kB
SReclaimable:          0 kB
SUnreclaim:            0 kB
KernelStack:        8592 kB
PageTables:        42660 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     8734824 kB
Committed_AS:    5521492 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      175680 kB
DirectMap2M:    16592896 kB

I tried parsing the output of /proc/meminfo and calculating the actual free memory as the sum of MemFree, Buffers and Cached, but as you can see, Buffers = 0 kB, and again Zabbix gets the value for the Proxmox host.
Could you explain this behavior?
How do I monitor RAM in containers?

I'm using Debian Jessie 8.7 for Proxmox,
Zabbix 3.2.6,
and Debian Jessie as the LXC OS.
Code:
uname -r
4.4.59-1-pve
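For reference, the MemFree + Buffers + Cached calculation I tried can be sketched as a small shell helper (a sketch only; the function name is made up, the field names are those in /proc/meminfo):

```shell
#!/bin/sh
# Sum MemFree, Buffers and Cached from /proc/meminfo (values are in kB).
# The regex is anchored at line start, so SwapCached is not matched.
free_kb() {
    awk '/^(MemFree|Buffers|Cached):/ { sum += $2 } END { print sum }' /proc/meminfo
}

free_kb
```

Against the listing above this yields 641612 + 0 + 2437208 = 3078820 kB; the catch described in this thread is that the numbers themselves may still be the host's.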
 
TBH it's not that it's an "LXC limitation"; it's more about how (and where) Zabbix extracts the data from the running instance: /proc/meminfo has the correct numbers, but sysinfo() (which Zabbix uses) does not.
 
Then why does /proc/meminfo show the MemTotal of the host, and not of the LXC container?
 
Probably you're running an obsolete LXC version that has a bug.

Container:
# cat /proc/meminfo
MemTotal: 1048576 kB
MemFree: 921108 kB
MemAvailable: 921108 kB

Host:
# cat /proc/meminfo
MemTotal: 6110864 kB
MemFree: 139456 kB
MemAvailable: 1201884 kB

ii liblxc1 2.0.8-0ubuntu1~16.04.2 amd64 Linux Containers userspace tools (library)
ii lxc-common 2.0.8-0ubuntu1~16.04.2 amd64 Linux Containers userspace tools (common tools)
ii lxcfs 2.0.7-0ubuntu1~16.04.1 amd64 FUSE based filesystem for LXC
ii lxd 2.0.10-0ubuntu1~16.04.1 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 2.0.10-0ubuntu1~16.04.1 amd64 Container hypervisor based on LXC - client

PS: I'm not using Proxmox for running LXC; it's just an Ubuntu machine.
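For what it's worth, the per-container values in /proc/meminfo come from lxcfs (visible in the package list above), which bind-mounts virtualized proc files into the container; as far as I know, lxcfs 2.0.x did not yet virtualize /proc/loadavg. A quick way to check what a container is actually seeing (a sketch; the function name is made up):

```shell
#!/bin/sh
# Report whether lxcfs is mounted over this container's proc files.
# If nothing matches, reads of /proc/meminfo go to the host's real procfs.
check_lxcfs() {
    if grep -q lxcfs /proc/self/mounts; then
        echo "lxcfs mounts present"
    else
        echo "no lxcfs mounts found"
    fi
}

check_lxcfs
```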
 
I'm running Proxmox based on Debian Jessie:
lxc-pve - 2.0.7-4
lxcfs - 2.0.6-pve1
I hope they fixed this behavior in the 2.0.8 release.
And what about /proc/loadavg? I have the same issue with it.
 
For memory you can create a user parameter (that's what I did) and read from /proc/meminfo.
 
That's what I tried to do until I got stuck on that MemTotal bug.
Anyway, the load average parameter is much more important for me. I don't want to update and reboot production servers just for a MemTotal fix.
 
Worked partially.

For me the problem is the Alpine LXC: free gives the memory of the entire Proxmox host and not the container itself. Ubuntu and Debian are OK with free; maybe the problem is the free build from Alpine (BusyBox)...
Code:
:~# free -V
BusyBox v1.35.0 (2022-11-19 10:13:10 UTC) multi-call binary.
Usage: free [-bkmgh]
Display free and used memory

/proc/meminfo has the right stuff, but I'm not enough of a Linux expert to get the correct values from there...
 
Yes, that's the case. I also went down the rabbit hole on this one and looked at the procps code. Alpine is mainly based on BusyBox. The package for Alpine Linux is also called procps, so you can install it and it works:

Code:
root@gateway-vserver ~ > free -m
              total        used        free      shared  buff/cache   available
Mem:          64145       13642       50493        1819           9         124
Swap:          9639         564        9075

root@gateway-vserver ~ > apk add procps
fetch http://ftp.halifax.rwth-aachen.de/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
(1/3) Installing libintl (0.21-r2)
(2/3) Installing libproc (3.3.17-r1)
(3/3) Installing procps (3.3.17-r1)
Executing busybox-1.35.0-r17.trigger
OK: 22 MiB in 58 packages

root@gateway-vserver ~ > free -m
               total        used        free      shared  buff/cache   available
Mem:             128           4         118           0           5         123
Swap:             64           0          64
 
You are awesome! Saved the day! Thanks! Also nice that other people can now find the answer to this kind of problem...
 
The problem with this is that it's not a Proxmox VE problem; it's a guest problem, because PVE only runs third-party code. Alpine, or more precisely BusyBox, the software it uses, should fix this so that it works out of the box. The main reason for using Alpine is its small footprint, and it's kind of weird to load Alpine Linux up with extra tools just to get some numbers right.

You're welcome!
 
Yes, indeed, I'm not blaming Proxmox for it, far from it; it's an Alpine problem. But the thread was primarily about fixing the container monitoring issue with Zabbix, which also means solving the memory issue in the containers... haha
 
