Search results

  1. Continuously increasing memory usage until oom-killer kills processes

    More examples:
                  total     used     free   shared  buff/cache  available
    Mem:        4194304  1883736       36  2305568     2310532    2310568
    Swap:       5242880  1048532  4194348
    This container is really irritating to use because it pauses all the time...
  2. Continuously increasing memory usage until oom-killer kills processes

    Not using tmpfs much at all in this situation, though I'll keep an eye on the systemd journal in /run.
    # df -k `grep tmpfs /proc/mounts | awk '{print $2}'`
    Filesystem     1K-blocks  Used  Available  Use%  Mounted on
    none                 492     0        492    0%  /dev
    tmpfs           65997564     0...
  3. Continuously increasing memory usage until oom-killer kills processes

    This is ridiculous, this container is running a simple Apache and Mailman with a gig of RAM:
                 total     used  free  shared  buffers  cached
    Mem:       1048576  1048464   112  920736        0  921504
    -/+ buffers/cache:  126960  921616
    Swap...
  4. Continuously increasing memory usage until oom-killer kills processes

    There is a very serious issue with LXC and RAM usage under ZFS. The OOM killer is running constantly on containers whose RAM allocations from 5.x were just fine for them forever. Suddenly I need to add 1, 2, or even 4 GB to all containers. I think caches are being accounted to the CTs; the oomkiller comes...
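A quick way to check whether cache is being charged against a CT's memory limit is to read its memory cgroup stats. A diagnostic sketch, assuming the cgroup v1 layout used by PVE 5.x/6.x; the CT ID 673 is illustrative:

```shell
# Show how much page cache vs. RSS is charged to CT 673's memory cgroup.
# Values in memory.stat are bytes; convert to MiB for readability.
awk '/^(cache|rss) /{printf "%s: %.1f MiB\n", $1, $2/1048576}' \
    /sys/fs/cgroup/memory/lxc/673/memory.stat
```

If "cache" dominates "rss", the pressure the OOM killer reacts to is mostly reclaimable page cache rather than the container's actual working set.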
  5. Force LXC CT to use secondary IP for all outbound connections? (how to force scope LINK?)

    Namely, my question is: why is this IP networking configuration not supported in lxc/*.conf files? Please let me know how to set scope link in an lxc/*.conf file.
  6. /proc and /sys missing for pct enter container but exists for ssh session in

    I am not upgrading or restarting lxcfs.service to cause this. It seems to happen by itself. (Unless OOM pressure has killed it and it has restarted itself or something similar?) Will investigate more.
  7. /proc and /sys missing for pct enter container but exists for ssh session in

    OK, upgraded to PVE 6.3 and this is still happening. /proc went missing in this container when I pct enter. pve-manager/6.3-2/22f57405 (running kernel: 5.4.73-1-pve) The problem is I fear that blindly restarting daemons or anything else from this shell may also have them inherit this broken...
  8. PCT list not working

    This continues to happen on multiple 5.x hosts, where /proc disappears for certain contexts (e.g. pct enter, while SSH in works -- probably because SSH inherits a working context from the daemon, whereas pct enter is a new context initialization). Will report if it occurs in 6.x. Don't...
  9. [PVE 6.0] Cannot set ZFS arc_min and arc_max

    You can force-drop your caches, per my instructions: "Why isn't the arc_max setting honoured on ZFS on Linux" https://serverfault.com/a/833338/261576 Dropping the ARC cache too low results in a garbage-collection-like situation from time to time, with a zfs_arc* process consuming lots of CPU and...
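The linked answer boils down to capping the ARC and dropping caches. A minimal sketch, assuming ZFS on Linux on the PVE host; the 2 GiB cap is illustrative, not a recommendation:

```shell
# Cap the ZFS ARC at 2 GiB at runtime (ZFS on Linux module parameter).
echo $((2 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Persist the cap across reboots.
echo "options zfs zfs_arc_max=$((2 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# Force-drop clean caches so the new cap is reflected immediately.
sync
echo 3 > /proc/sys/vm/drop_caches
```

As the snippet warns, setting the cap too low can cause periodic CPU spikes from ARC reclaim, so leave reasonable headroom.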
  10. Force LXC CT to use secondary IP for all outbound connections? (how to force scope LINK?)

    btw a friend suggested an alternative solution, but it does not work: ip route add default via 192.168.55.1 src $Q So SCOPE seems to be the way to do this. The host doesn't seem to route the IPs back (test ping to something on the local physical network on a foreign subnet: it sees pings and replies via route back...
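For reference, the scope-link approach the thread converges on can be sketched as follows. The address, gateway, and interface name here are illustrative placeholders, not values from the thread, and the commands require root:

```shell
Q=192.168.55.200   # illustrative secondary IP standing in for $Q

# Make the gateway reachable directly on the link via a scope-link route,
# then source all outbound traffic from $Q with an explicit src.
ip addr add "$Q/32" dev eth0
ip route add 192.168.55.1 dev eth0 scope link
ip route replace default via 192.168.55.1 src "$Q"
```

The scope-link route is what lets the default route's gateway resolve even though $Q is a /32 with no on-link subnet of its own.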
  11. Force LXC CT to use secondary IP for all outbound connections? (how to force scope LINK?)

    The top-level router splits traffic to site 1 and site 2 based on destination IP. Sites 1 and 2 communicate via this router at layer 3 with routing; there's no opportunity/ability to VLAN or otherwise share a broadcast Ethernet between them. (I am also not interested in GRE tunnels, etc.) Nonetheless I am...
  12. Force LXC CT to use secondary IP for all outbound connections? (how to force scope LINK?)

    An ancient container I inherited in a /25 at location 1, with IP Q on host Z, needs to be moved to location 2 on host Y and retain IP Q. We cannot move the /25; there are other hosts+VMs+CTs on it at location 1. We can only route Q/32 to Y. The CT's software cannot be touched or reconfigured or otherwise...
  13. jitsi breaks pct list and pve web console: can't open '/sys/fs/cgroup/cpuacct/lxc/673/ns/cpuacct.stat'

    Bump? Zoom is politically compromised; Jitsi is becoming a serious application used by customers. Please advise.
  14. lxc issues with proc disappearing to certain processes

    This is the second server with this issue now. I mentioned it here before: https://forum.proxmox.com/threads/pct-list-not-working.59820/#post-281223 When I 'pct enter' the LXC there's no /proc, which causes a lot of problems. It seems a customer SSHing in as a user and then su'ing to root also saw...
  15. jitsi breaks pct list and pve web console: can't open '/sys/fs/cgroup/cpuacct/lxc/673/ns/cpuacct.stat'

    This is a more serious issue: I cannot create new containers and the console is partly broken/not reporting anything about any containers (all greyed out) when this container is running. To add a container, I had to shut down the offending one temporarily. Any hints?
  16. jitsi breaks pct list and pve web console: can't open '/sys/fs/cgroup/cpuacct/lxc/673/ns/cpuacct.stat'

    proxmox-ve: 5.4-2 (running kernel: 4.15.18-21-pve)
    pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
    pve-kernel-4.15: 5.4-9
    pve-kernel-4.15.18-21-pve: 4.15.18-48
    pve-kernel-4.15.18-12-pve: 4.15.18-36
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon...
  17. cpanel dovecot resource issue with apparmor

    This seems related to an issue I'm having again now: https://forum.proxmox.com/threads/pct-list-broken-due-to-container-problem-cant-open-sys-fs-cgroup-cpuacct-lxc-673-ns-cpuacct-stat.68430/
  18. jitsi breaks pct list and pve web console: can't open '/sys/fs/cgroup/cpuacct/lxc/673/ns/cpuacct.stat'

    A few days ago I added some sound loopback (aloop) stuff to run Jitsi in a container. I put this into lxc/673.conf:
    lxc.apparmor.profile: unconfined
    lxc.cgroup.devices.allow = c 116:2 rwm
    lxc.cgroup.devices.allow = c 116:4 rwm
    lxc.cgroup.devices.allow = c 116:3 rwm
    lxc.cgroup.devices.allow =...
  19. ERROR lxc_network - network.c:instantiate_veth:130 - Failed to create veth pair

    In my case it was trying to add an IP via the container's /etc/pve/lxc/*.conf to a vmbr bridge that didn't exist. The host's /etc/network/interfaces had an error, so the bridge was never created.
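For comparison, a working bridge stanza in the host's /etc/network/interfaces looks roughly like this; "vmbr1", "eno1", and the address are illustrative names, not taken from the thread:

```
auto vmbr1
iface vmbr1 inet static
        address 192.168.55.10/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

If the stanza has a syntax error or references a missing NIC, ifupdown skips the bridge, and the container's veth pair then fails to attach at start with exactly this instantiate_veth error.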
