Search results

  1. LXC and tvheadend - Continuity counter error

    Check your tmpfs usage; it's charged to RAM for the container. My culprit was systemd-journald; I had to put in a crontab entry to vacuum the logs out.
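
    A minimal sketch of such a crontab entry (the size cap, retention, and schedule here are illustrative, not from the post):

        # /etc/cron.d/vacuum-journal -- illustrative values
        # Daily at 03:00, cap the journal at 50M and drop entries older than 7 days
        0 3 * * * root journalctl --vacuum-size=50M --vacuum-time=7d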
  2. PTY allocation request failed on channel 0 on Centos

    According to https://forum.proxmox.com/threads/proxmox-5-1-pty-allocation-request-failed-on-channel-0.43460/ you must remove fstab entries for /dev/pts (and probably others, like tmpfs) from OpenVZ containers when converting them to LXC under newer Proxmox, or you won't get PTYs and you will get this...
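
    The entries in question look roughly like these (an illustrative OpenVZ-era /etc/fstab inside the container; your exact lines may differ):

        # Remove lines like these from the container's /etc/fstab:
        none  /dev/pts  devpts  rw,gid=5,mode=620  0 0
        none  /dev/shm  tmpfs   defaults           0 0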
  3. Continuously increasing memory usage until oom-killer kill processes

    I've seen other posts around the internet about people hunting for the culprit and finding memory leaks in various apps. If it's not your tmpfs (and note, there are tmpfs users other than systemd-journald), then it's something else. Keep looking.
  4. Continuously increasing memory usage until oom-killer kill processes

    The misleading issue is that it's reported as buffers/cache instead of under "used". If you google it, there are dozens of posts by others misled by this, trying to flush their caches, which will not help at all. "Used" would be a fairer category to put it in, but I'm guessing because of the way...
  5. Continuously increasing memory usage until oom-killer kill processes

    That said, is there a way to restrict tmpfs size in a container directly? A sane default of half the container's RAM, unless overridden, would be nice.
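
    Inside the container you can at least cap individual tmpfs mounts yourself; a sketch, with illustrative sizes:

        # One-off: shrink /run in place
        mount -o remount,size=256M /run
        # Persistent: cap /tmp via the container's /etc/fstab
        tmpfs  /tmp  tmpfs  size=256M,mode=1777  0 0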
  6. LXC Guest not clearing memory buffers/cache

    You may not want or need to drop caches: 'buffers/cache' is also charged for tmpfs usage (misleadingly, by LXCFS), and tmpfs pages cannot be evicted from RAM. What's using all the tmpfs RAM? In /run we have systemd-journald's logs, and systemd provides no sane default for vacuuming them. See...
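
    The usual fix is to set explicit caps in journald.conf (values here are illustrative) and then restart systemd-journald:

        # /etc/systemd/journald.conf  (illustrative caps)
        [Journal]
        # cap persistent logs under /var/log/journal
        SystemMaxUse=50M
        # cap volatile logs in the /run tmpfs
        RuntimeMaxUse=50M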
  7. Continuously increasing memory usage until oom-killer kill processes

    OK, after more reading, I was wrong. While the OOM killer may be held at bay by the LXC/LXD default 90% RAM-use threshold, the kernel cannot evict tmpfs pages. How is buffers/cache related to tmpfs? There's a misleading 'feature' in LXCFS that reports tmpfs as 'buffers/cache' RAM. So no amount of researching...
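
    To see which tmpfs mounts are holding the pages that LXCFS shows as buffers/cache, something like:

        df -h -t tmpfs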
  8. Continuously increasing memory usage until oom-killer kill processes

    Note that doing so will clear the caches/buffers for the entire host, and dump the ARC if you're using ZFS. This will spike read load on your disks, since there's no read cache available until it repopulates as blocks are read. It's a horrible solution, honestly. This is a major bug...
  9. Continuously increasing memory usage until oom-killer kill processes

    The workaround for now is echo 1 > /proc/sys/vm/drop_caches (you can keep some caches by not using echo 3), but it's a major performance hit. This is the same issue, and there are more like it because it's a common problem; these threads should be merged...
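
    For reference, the drop_caches values from the kernel documentation (run as root on the host; this is the workaround from the post, not a fix):

        sync                               # flush dirty pages first
        echo 1 > /proc/sys/vm/drop_caches  # page cache only
        echo 2 > /proc/sys/vm/drop_caches  # reclaimable slab (dentries, inodes)
        echo 3 > /proc/sys/vm/drop_caches  # both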
  10. LXC Guest not clearing memory buffers/cache

    It's not fixed in 7.1, and you can't drop caches in a CT, only for the whole host. On my 256GB host with 96GB of ARC, that seems like a massive cache loss/performance hit.

        # echo 1 > /proc/sys/vm/drop_caches
        bash: /proc/sys/vm/drop_caches: Read-only file system

    These are basically the same...
  11. Continuously increasing memory usage until oom-killer kill processes

    OK! Finally caught this happening. It's NOT systemd-journald. (EDIT: actually, it is, but in a non-obvious way; see post-472649 further down in this thread.) I started continuously logging mem and top output from the container every 15 seconds or so, and this is the last top output just...
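
    A sketch of that kind of logging loop (the interval and log path are assumptions):

        while true; do
            { date; free -m; top -bn1 | head -n 20; } >> /var/log/memwatch.log
            sleep 15
        done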
  12. Continuously increasing memory usage until oom-killer kill processes

    systemd-journald could also be a culprit:

        PID USER  PR  NI   VIRT    RES    SHR S %CPU %MEM   TIME+ COMMAND
         50 root  20   0 284232 181392 178368 S  0.0 11.5 1:46.72 systemd-journal

    That's huge, and the...
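
    A quick way to check whether the journal is the culprit:

        journalctl --disk-usage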
  13. How to run a script in a container?

    From what I understand, <command> can take no options or arguments. It gets very complex if you want some.
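
    Assuming the <command> here is the one passed to Proxmox's pct exec, a common workaround is to hand the whole command line to a shell; a sketch (the container ID and script path are illustrative):

        pct exec 100 -- sh -c '/root/myscript.sh --flag arg1'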
  14. lxc-ls no longer reporting ram usage in pve7.1

        # lxc-ls -fF NAME,STATE,RAM
        NAME STATE   RAM
        100  RUNNING 0.00MB
        261  RUNNING 0.00MB
        270  RUNNING 0.00MB

        proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
        pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
        pve-kernel-helper: 7.1-12
        pve-kernel-5.13: 7.1-7...
  15. [SOLVED] CTs used memory keeps growing until full

    This seems related to buffers/cache filling up in the container until the OOM killer runs: https://forum.proxmox.com/threads/continuously-increasing-memory-usage-until-oom-killer-kill-processes.67666
  16. Keyboard and mouse input not working with PVE 7.0 Installer and HP iLO4

    Turns out this is a VNC server implementation failure: tightvncserver does not implement some XKeyboard features that Qt5 needs. (I never put a BMC/iLO on external networks, always on RFC 1918 IPs, and I never route them or give them a gateway, for security reasons. Instead I put a jumphost on the BMC...
  17. Keyboard and mouse input not working with PVE 7.0 Installer and HP iLO4

    There are only hotkeys in the virtual keyboard, and the only settings are US-101 vs. JP-something keyboards (I did try setting it to JP and back, which didn't help). Apparently a Windows box attached directly to the BMC/iLO LAN works just fine. I'll have to use the damn VPN (vs. port-forwarding to a VNC session...
  18. Keyboard and mouse input not working with PVE 7.0 Installer and HP iLO4

    The keyboard works for me here, except random keystrokes are sent, including ctrl keys (e.g., I have discovered '5' = 'backspace'). I can almost get through all the menus by cut-and-pasting random stuff in (which I can change once it's installed, after I SSH in with a cut-and-pasted root password...
  19. Continuously increasing memory usage until oom-killer kill processes

    But it's easy to see the behaviour regardless: just watch buffers on the container, and when they grow to a similar size as the free RAM (~70-80%), you are running into problems if your container is at >20% RAM usage. Something's gotta give, and since buffers aren't evicted, the OOM killer is awakened instead. I...
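
    E.g., from inside the container:

        watch -n 15 free -m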
  20. Continuously increasing memory usage until oom-killer kill processes

    More examples:

                     total     used     free   shared  buff/cache  available
        Mem:       4194304  1883736       36  2305568     2310532    2310568
        Swap:      5242880  1048532  4194348

    This container is really irritating to use because it pauses all the time...
