I think it might be related to the debian repo because it gets stuck at
0% [Connecting to ftp.ca.debian.org (192.175.120.168)]
and sometimes it works without any issue.
Hit:1 http://security.debian.org stretch/updates InRelease
Hit:2 https://enterprise.proxmox.com/debian/pve stretch InRelease...
Hello,
Since yesterday, we are getting the following error on multiple servers when running pveupdate:
command 'apt-get update' failed: exit code 100
Manually running apt-get update works fine. We always used apt-get dist-upgrade when upgrading.
We are using the paid enterprise repo.
Any idea...
Hello,
There seems to be a bug when creating a new container where the container is not reachable until you log in to the server using the console. I've tested this on multiple servers with the same results.
Steps to reproduce (rough CLI equivalent below):
1-Create a new CT
2-Start the new CT
3-Ping the CT (Should timeout)...
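For reference, roughly the CLI equivalent of what I'm doing (a sketch; the VMID, template name and network settings are just examples):
# create and start a fresh container (template name is only an example)
pct create 121 local:vztmpl/ubuntu-16.04-standard_16.04-1_amd64.tar.gz --hostname test-ct --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 121
# pinging the container's IP from another machine times out...
# ...until you open a console session once:
pct console 121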
It really depends on what your container can afford and the logging you want to keep.
For example, if your container has 2 GB of RAM and you allocate 3 GB to journal logging, then the tmpfs will fill up and the OOM killer will be invoked, as the host will think the container uses 3 GB of RAM out of the 2 GB...
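A quick way to check how much of that budget the runtime journal is actually taking, from inside the container (a sketch, assuming the journal lives on the default tmpfs under /run/log/journal):
# size and usage of the tmpfs holding the runtime journal
df -h /run/log/journal
# how much space journald itself reports using
journalctl --disk-usage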
Right, but at least it will prevent tools like lxc-info from reporting 100% RAM usage when in fact it is not, which is a big deal for people like us who use such tools to monitor RAM usage on production systems.
Like you said, we should monitor the RAM usage from the host using the built-in tools and...
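For what it's worth, this is roughly how the real usage can be read from the host side (a sketch; 121 is just an example VMID and the path assumes the cgroup v1 layout used on current PVE releases):
# memory actually charged to the container's cgroup, in bytes
cat /sys/fs/cgroup/memory/lxc/121/memory.usage_in_bytes
# configured memory limit, for comparison
cat /sys/fs/cgroup/memory/lxc/121/memory.limit_in_bytes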
Hey Alwin,
Yes, I'm already using lxc-info to monitor my containers' RAM usage. After some research I found this bug on the lxcfs GitHub that was resolved 2 months ago, and it seems exactly like my issue.
The issue is the following:
Here's the thread: https://github.com/lxc/lxcfs/issues/175...
I'm not sure I understand correctly. By tool you mean services running inside the container? The issue is the same across all containers on the same host. I mean the shared memory will be the same on Containers A, B and C if they are located on the same host.
How am I supposed to use the...
Hello,
It seems the shared memory of an LXC container shows the shared memory of the host.
Here's the host:
             total       used       free     shared    buffers     cached
Mem:         64414      62599       1815       1833       9329      31029
-/+ buffers/cache:      22241...
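An easy way to see it is to compare the Shmem line of /proc/meminfo on the host and inside a container (a sketch; 121 is just an example VMID):
# on the host
grep Shmem /proc/meminfo
# inside a container, via pct
pct exec 121 -- grep Shmem /proc/meminfo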
This is weird, as it worked fine for me and the logging kept working. The post mentions some changes (see below) made in 2016 related to the daemon. Maybe this was included in Ubuntu 16.
You are right, it is worded so you might think it only operates on disk, but after testing it does in fact clean up /run/log/journal.
So we could use the following:
First cleanup old entries:
journalctl --vacuum-size=50M
Then edit the config
vi /etc/systemd/journald.conf
Finally, restart to...
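For reference, the journald.conf settings I have in mind look roughly like this (a sketch; the 50M values are just examples, and RuntimeMaxUse is the one that caps /run/log/journal while SystemMaxUse caps /var/log/journal):
[Journal]
SystemMaxUse=50M
RuntimeMaxUse=50M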
Could we do this after editing journald.conf instead?
journalctl --vacuum-size=50M
systemctl restart systemd-journald
Thanks. This will indeed help others.
You are right. It did make a good difference.
Before:
              total        used        free      shared  buff/cache   available
Mem:           2048        1057           6       16652         983           6
Swap:             0           0           0
Filesystem Size...
I'm also using the Ubuntu 16.04 template and have the same issue, but the tmpfs doesn't seem full:
Filesystem                    Size  Used Avail Use% Mounted on
rpool/data/subvol-121-disk-1   10G  1.1G  9.0G  11% /
none                          492K     0  492K   0% /dev
tmpfs                          63G...
Dumping more logs from another occurrence that happened this morning, in case it helps.
[2018-01-19 00:01:55] Process accounting resumed
[2018-01-19 11:13:54] apache2 invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=0
[2018-01-19 11:13:54] apache2...
Got the issue again on 2 other containers. It seems to only happen on containers that have a low RAM limit.
Both machines have 1 GB of RAM and run Ubuntu 16.04.
Here's one of them:
[3174928.549171] Memory cgroup out of memory: Kill process 2663 (mysqld) score 112 or sacrifice child...