[SOLVED] ELK stack does not start in LXC container

rusquad

Member
Feb 3, 2023
Fresh installation of Proxmox. This stack works everywhere else, and I also tried it on Proxmox itself. There are no startup logs, it just won't start. I don't know where to dig, it's been two days already. Please, I need help.
Code:
 * Starting periodic command scheduler cron ...done.
 * Starting Elasticsearch Server ...fail!
waiting for Elasticsearch to be up (1/30)
...
waiting for Elasticsearch to be up (30/30)
Couldn't start Elasticsearch. Exiting. Elasticsearch log follows below.
cat: /var/log/elasticsearch/elasticsearch.log: No such file or directory

Code:
arch: amd64
cores: 10
features: keyctl=1,nesting=1
hostname: elk
memory: 10240
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=4A:58:5B:C0:A8:7D,ip=dhcp,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-101-disk-0,size=40G
swap: 0
unprivileged: 1

Code:
elk:
  container_name: elk
  image: sebp/elk:8.3.3
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
 
I found this in dmesg on the Proxmox host:
Code:
[152221.993933] [ 536988] 100991 536988  8744226  2561637 20787200        0             0 java
[152221.993936] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=docker-f1a654f9ac2afd0ac2fdc016d144ef448b859215a14a8d6059e8a4d3d6a64188.scope,mems_allowed=0-1,oom_memcg=/lxc/101,task_memcg=/lxc/101/ns/system.slice/docker-f1a654f9ac2afd0ac2fdc016d144ef448b859215a14a8d6059e8a4d3d6a64188.scope,task=java,pid=536988,uid=100991
[152221.993975] Memory cgroup out of memory: Killed process 536988 (java) total-vm:34976904kB, anon-rss:10246548kB, file-rss:0kB, shmem-rss:0kB, UID:100991 pgtables:20300kB oom_score_adj:0
 
How much RAM did you allocate to the LXC, and how much to Elasticsearch, Logstash, and Kibana?
The LXC has 10 GB; there are no specific limits on the Docker containers themselves. What do I need to do?
 
Thanks, I limited ES_HEAP_SIZE to 3 GB. But how can I increase the allocated memory size so that the OOM killer does not kill the Java VM?
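For reference, here is roughly how that heap cap could look in the compose file from earlier. This is only a sketch: the ES_HEAP_SIZE variable and the 3g value come from the post above, and the rest just mirrors the original compose entry.
Code:
elk:
  container_name: elk
  image: sebp/elk:8.3.3
  environment:
    # Cap the Elasticsearch JVM heap explicitly so it no longer depends
    # on how much memory the JVM thinks it can see.
    - ES_HEAP_SIZE=3g
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"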
 
To my knowledge, the quota is set by the RAM given to the LXC, so there isn't any quota besides that. ES is a Java-based application, so its heap comes from the Xmx/Xms variables, but it also uses RAM for filesystem/Lucene caching.
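If the follow-up question is how to raise that RAM quota on the LXC itself, it can be changed from the Proxmox host with pct. A sketch, assuming the container is VMID 101 as in the config posted above; the 16384 MiB value is only an example:
Code:
# On the Proxmox host: raise the LXC memory limit (value in MiB).
pct set 101 --memory 16384
# Optionally give the container some swap as well (the config above has swap: 0).
pct set 101 --swap 2048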
 
Yep - completely agree, the best solution is to be explicit about the amount of memory you are giving a java process (-Xmx, -Xms, ES_HEAP_SIZE in this specific case).

That said, java has conservative ergonomic defaults (it caps the max heap at 25% of the memory it can see), and it's unfortunate that the OOM killer fires when the heap size is not explicit. I suspect the cgroups issue I linked is ultimately the cause.

For example, let's say I have a 64GB host, and set the container to 4GB.

If java is able to see the cgroup limit of 4GB, it will set the max heap size to 1GB by default.

If java is unable to see the cgroup limit and sees the 64GB host limit, it will set the max heap size to 16GB by default. I suspect this is the case, and then bad things happen once the 16GB java process tries to allocate past the 4GB cgroup limit.
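One way to check which of those two cases applies is to ask the JVM inside the container what its ergonomics actually picked. A sketch, assuming the container is named elk as in the compose snippet above and that a java binary is on the PATH in the image (the image may only ship Elasticsearch's bundled JDK, in which case call that binary directly):
Code:
# Show the heap sizes the JVM's ergonomics selected inside the container.
docker exec elk java -XshowSettings:vm -version
# More detail: dump the final JVM flag values and filter for the max heap.
docker exec elk java -XX:+PrintFlagsFinal -version | grep -i maxheapsize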
 
