According to https://forum.proxmox.com/threads/proxmox-5-1-pty-allocation-request-failed-on-channel-0.43460/
you must remove fstab entries regarding /dev/pts (and probably others, like tmpfs) from OpenVZ containers when converting to LXC under newer Proxmox, or you won't get PTYs and you will get this...
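For illustration, the kind of legacy entries to delete look something like this (a sketch only; the exact lines depend on the original OpenVZ template):

    # in the container's /etc/fstab -- remove these; LXC manages
    # /dev/pts and tmpfs mounts itself
    none  /dev/pts  devpts  rw,gid=5,mode=620  0 0
    none  /dev/shm  tmpfs   defaults           0 0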
I've seen other posts around the internet about people hunting for the culprit and finding memory leaks in various apps. If it's not your tmpfs (and note there are tmpfs users other than systemd-journald), then it's something else. Keep looking.
The misleading part of this issue is that the usage is reported as buffers/cache instead of as "used". If you google it, there are dozens of posts by others misled by this, trying to flush their caches, which will not help at all. "Used" would be a fairer category to put it in, but I'm guessing because of the way...
You may not want or need to drop caches: 'buffers/cache' is also charged for tmpfs usage (misleadingly, by LXCFS), and tmpfs pages cannot be evicted from RAM.
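To see what tmpfs is actually holding, rather than what free lumps into buff/cache, standard tools suffice (nothing here is thread-specific):

    # inside the CT: real tmpfs consumption, mount by mount
    df -h -t tmpfs
    # and what is actually under /run
    du -xsh /run/* 2>/dev/null | sort -h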
What's using all the tmpfs RAM? In /run we have the systemd-journald logs. And systemd provides no sane default for vacuuming logs.
see...
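One way to cap the volatile journal, using stock journald options (64M is an arbitrary example value):

    # /etc/systemd/journald.conf -- limit what journald keeps in /run/log/journal
    [Journal]
    RuntimeMaxUse=64M

    # apply it, and shrink what is already there:
    systemctl restart systemd-journald
    journalctl --vacuum-size=64M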
OK, after more reading, I was wrong. While the OOM killer may be held at bay by the LXC/LXD default 90% RAM-use threshold, it cannot evict tmpfs pages.
How is buffers/cache related to tmpfs? There's a misleading 'feature' in LXCFS that reports tmpfs as 'buffers/cache' RAM. So no amount of researching...
Note that doing so will clear the caches/buffers for the entire host, and will dump the ARC if you're using ZFS. This will spike read load on your disks, since there's no read cache left, and it will repopulate as blocks are read. It's a horrible solution, honestly. This is a major bug...
The workaround for now is echo 1 > /proc/sys/vm/drop_caches (you can keep some caches by not using echo 3), but it's a major performance hit.
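For reference, the drop_caches levels (run on the host; a sync first is conventional):

    sync
    echo 1 > /proc/sys/vm/drop_caches   # page cache only
    echo 2 > /proc/sys/vm/drop_caches   # dentries and inodes only
    echo 3 > /proc/sys/vm/drop_caches   # both -- the heaviest hit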
This is the same issue, and there are more threads like it because it's a common problem; these threads should be merged...
It's not fixed in 7.1, and you can't drop caches in a CT, only for the whole host. On my 256GB host with 96GB of ARC, that seems like a massive cache loss/performance hit.
# echo 1 > /proc/sys/vm/drop_caches
bash: /proc/sys/vm/drop_caches: Read-only file system
These are basically the same...
OK! Finally caught this happening. It's NOT systemd-journal.*
*EDIT: (Actually, it is, but in a non-obvious way. See further down in this thread, post-472649.)
I started logging mem and top on the container continuously, to a log every 15 seconds or so, and this is the last top output just...
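Roughly this kind of loop (a sketch; the log path is illustrative):

    # append a timestamped memory snapshot every 15 seconds
    while true; do
        { date; free -m; top -b -n 1 | head -n 20; echo; } >> /root/ct-memlog.txt
        sleep 15
    done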
This seems related to buffers/cache filling up in the container until the OOM killer runs:
https://forum.proxmox.com/threads/continuously-increasing-memory-usage-until-oom-killer-kill-processes.67666
Turns out this is a VNC server implementation failure: tightvncserver does not implement some XKEYBOARD extension features that Qt5 needs.
(I never put BMC/iLO on external networks, always on RFC1918 IPs, and I never route them or give them a gateway, for security reasons. Instead I put a jumphost on the BMC...
There are only hotkeys in the virtual keyboard, and the only settings are US101 vs. JP-something keyboards (I did try setting it to JP and back; it didn't help).
Apparently a Windows box attached directly to the BMC/iLO LAN works just fine. I'll have to use the damn VPN (vs. port-forwarding to a VNC session...
The keyboard works for me here, except random keystrokes are sent, including Ctrl keys (e.g., I have discovered '5' = 'backspace'). I can almost get through all the menus by cut-and-pasting random stuff in (which I can change once it's installed, after I SSH in with a cut-and-pasted root password...
But it's easy to see the behaviour regardless -- just watch buffers on the container, and when they get to a similar size as the free RAM (~70-80%) you are running into problems if your container is at >20% RAM usage. Something's gotta give, and since buffers aren't evicted, the OOM killer is awakened instead.
I...
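A quick one-liner to eyeball that ratio inside the container (just an illustration; $2 and $6 are the total and buff/cache columns of free -m):

    free -m | awk '/^Mem:/ { printf "buff/cache: %d of %d MiB (%.0f%%)\n", $6, $2, 100*$6/$2 }'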
More examples:
              total        used        free      shared  buff/cache   available
Mem:        4194304     1883736          36     2305568     2310532     2310568
Swap:       5242880     1048532     4194348
This container is really irritating to use because it pauses all the time...