Not using tmpfs much at all in this situation, though I'll keep an eye on the systemd journal in /run.
#df -k `grep tmpfs /proc/mounts | awk '{print $2}'`
Filesystem 1K-blocks Used Available Use% Mounted on
none 492 0 492 0% /dev
tmpfs 65997564 0...
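Side note: the grep in that one-liner matches "tmpfs" anywhere in the line, which is why the devtmpfs mount (/dev) shows up too. A slightly safer sketch, anchoring on the filesystem-type field of /proc/mounts instead:

```shell
# Match "tmpfs" only in the fstype field (3rd column of /proc/mounts),
# then feed the mountpoints to df. Avoids also catching devtmpfs.
awk '$3 == "tmpfs" { print $2 }' /proc/mounts | xargs df -k
```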
This is ridiculous; this container is running a simple Apache and Mailman with a gig of RAM:
total used free shared buffers cached
Mem: 1048576 1048464 112 920736 0 921504
-/+ buffers/cache: 126960 921616
Swap...
There is a very serious issue with LXC and RAM usage under ZFS. The OOM killer is running constantly on containers whose RAM allocations from 5.x were just fine for them forever. Suddenly I need to add 1, 2 or even 4 GB to all containers.
I think caches are being accounted to the CTs; the OOM killer comes...
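If that theory holds, the per-CT cgroup memory stats should show it. A quick diagnostic sketch (the CT id 101 and the cgroup v1 path are assumptions for illustration, adjust for your host):

```shell
# If page cache is being charged to the container, "cache" will be a
# large share of usage_in_bytes. CT id 101 and the cgroup v1 ns path
# are placeholders; adjust to where your host mounts the memory
# controller for the CT.
grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/lxc/101/ns/memory.stat
cat /sys/fs/cgroup/memory/lxc/101/ns/memory.usage_in_bytes
```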
Namely, my question is: why is this IP networking configuration not supported in lxc/*.conf files?
Please let me know how to set SCOPE LINK in an lxc/*.conf file.
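For reference, this is what I am trying to express in the conf, done by hand inside the container. A sketch only: 203.0.113.10 stands in for Q, 192.168.55.1 is the off-subnet gateway from this thread, and eth0 is an assumed interface name.

```shell
# The classic "/32 + scope link gateway" pattern, run inside the CT.
# 203.0.113.10 is a placeholder for Q; the gateway is NOT on Q's subnet.
ip addr add 203.0.113.10/32 dev eth0            # the IP as a host address, no subnet
ip route add 192.168.55.1 dev eth0 scope link   # make the gateway reachable on-link
ip route add default via 192.168.55.1 dev eth0
```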
I am not upgrading or restarting lxcfs.service to cause this. It seems to happen by itself. (Unless OOM pressure has killed it and it has restarted itself or something similar?) Will investigate more.
OK, upgraded to PVE 6.3 and this is still happening.
/proc went missing in this container when I pct enter.
pve-manager/6.3-2/22f57405 (running kernel: 5.4.73-1-pve)
The problem is, I fear that restarting daemons or anything else blindly from this shell may also have them inherit this broken...
This continues to happen on multiple 5.x hosts, where /proc disappears in certain contexts (e.g. pct enter, while SSHing in works, probably because SSH inherits a working context from the daemon whereas pct enter initializes a new context). Will report if it occurs in 6.x. Don't...
You can force-drop your caches, per my instructions:
"Why isn't the arc_max setting honoured on ZFS on Linux?" https://serverfault.com/a/833338/261576
Dropping the ARC cache too low results in a garbage-collection-like situation from time to time, with a zfs_arc* process consuming lots of CPU and...
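The gist of that answer, as a sketch (the 4 GiB cap is just an example value; pick one suited to your host):

```shell
# Cap the ZFS ARC at 4 GiB (zfs_arc_max is in bytes). Setting it at
# runtime may not shrink an already-grown ARC until caches are
# dropped, which is the point of the linked answer.
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
echo 3 > /proc/sys/vm/drop_caches   # drop pagecache + dentries/inodes

# Persist the cap across reboots:
echo "options zfs zfs_arc_max=$((4 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
```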
BTW, a friend suggested an alternative solution, but it does not work: ip route add default via 192.168.55.1 src $Q
So SCOPE seems to be the way to do this.
The host doesn't seem to route the IPs back (test ping to something on the local physical network on the foreign subnet: it sees the pings and replies via the route back...
The top-level router splits traffic to site 1 and site 2 based on destination IP. Sites 1 and 2 communicate via this router at layer 3 with routing; there's no opportunity/ability to VLAN or otherwise share a broadcast Ethernet between them. (I am also not interested in GRE tunnels, etc.)
Nonetheless I am...
An ancient container I inherited, in a /25 at location 1 with IP Q on host Z, needs to be moved to location 2 on host Y and retain IP Q. We cannot move the /25; there are other hosts+VMs+CTs on it at location 1. We can only route Q/32 to Y.
The CT's software cannot be touched or reconfigured or otherwise...
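The host side of that setup, as a sketch. Assumptions: the upstream router already sends Q/32 to host Y, the CT sits behind vmbr0, and 203.0.113.10 again stands in for Q.

```shell
# On host Y: forward traffic and point Q/32 at the bridge the CT is on.
# 203.0.113.10 is a placeholder for Q; vmbr0 is an assumed bridge name.
echo 1 > /proc/sys/net/ipv4/ip_forward   # Y must route, not just bridge
ip route add 203.0.113.10/32 dev vmbr0   # Q is on-link behind the bridge
```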
This is the second server with this issue now.
I mentioned it here before:
https://forum.proxmox.com/threads/pct-list-not-working.59820/#post-281223
When I 'pct enter' the LXC, there's no /proc; this causes a lot of problems. It seems a customer SSHing in as a user and then su'ing to root also saw...
This is a more serious issue: I cannot create new containers, and the console is partly broken, not reporting anything about any containers (all greyed out), while this container is running.
To add a container, I had to shut down the offending one temporarily.
Any hints?
This seems related to an issue I'm having again now:
https://forum.proxmox.com/threads/pct-list-broken-due-to-container-problem-cant-open-sys-fs-cgroup-cpuacct-lxc-673-ns-cpuacct-stat.68430/
A few days ago I added some sound loopback (aloop) stuff to run Jitsi in a container. I put this into lxc/673.conf:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: c 116:2 rwm
lxc.cgroup.devices.allow: c 116:4 rwm
lxc.cgroup.devices.allow: c 116:3 rwm
lxc.cgroup.devices.allow:...
In my case, it was trying to add an IP via the container's /etc/pve/lxc/*.conf to a vmbr bridge that didn't exist. The host's /etc/network/interfaces had an error, so the bridge was never created.