Is there a reason for not enabling this by default in Proxmox? I understand that some LXC users do not use LXC as a VPS (e.g. Docker, ...), but since Proxmox is a container-based VPS implementation, it makes sense to have as much isolation as possible.
Hi!
I put the syslog errno 1 line into the /usr/share/lxc/config/common.seccomp file and it does a perfect job of preventing containers from seeing what's in dmesg:
# dmesg
dmesg: read kernel buffer failed: Operation not permitted
but I recently found that the kernel messages are still getting into syslog, so...
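For reference, this is roughly what the policy ends up looking like with the extra line appended; the surrounding entries are the stock LXC blacklist and may differ between LXC versions, so take this as a sketch rather than the exact shipped file:

2
blacklist
[all]
kexec_load errno 1
open_by_handle_at errno 1
init_module errno 1
finit_module errno 1
delete_module errno 1
syslog errno 1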
Does this mean that GlusterFS would be able to run LXC directly in a subdirectory (like ZFS), or would I still have to use LXC disk images (like the dir backend)? It would be nice if it were possible to somehow dynamically allocate space like ZFS subvols do, so we don't need any ballooning for thin...
But if your customer sees a loadavg of 10 in an empty, fresh CT with almost no services running, it doesn't look very good.
It's not part of the kernel. It's part of lxcfs, which is, AFAIK, in userspace (via FUSE). However, I hope we'll get this soon.
I've already pointed this out :) https://forum.proxmox.com/threads/separate-loadavg-for-individual-containers.44460/
Can't wait for this feature, as my Nagios has been raging with red loadavg numbers since I moved from OpenVZ to LXC :)
LXCFS finally has per-container loadavg!
https://github.com/lxc/lxcfs/pull/237
https://github.com/lxc/lxcfs/commit/b04c86523b05f7b3229953d464e6a5feb385c64a
I wonder how long it will take to get into Proxmox...
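Once an lxcfs build with that change reaches the nodes, an easy way to verify that the virtualization is active should be to check whether /proc/loadavg inside the CT is actually an lxcfs (FUSE) mount; the exact mount listing is an assumption on my side, but the idea is:

# inside the container: look for a fuse.lxcfs entry covering /proc/loadavg
grep loadavg /proc/mounts
# if it is there, the values should reflect only the container's own tasks
cat /proc/loadavg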
Actually, I've already found that this is not the case and the problem is NOT related to zswap:
https://forum.proxmox.com/threads/lxc-container-using-more-than-100-swap.31203/#post-211451
I think that LXC slightly changed the swap size API and Proxmox has not reflected the changes yet.
I have the same issue on PVE 5.2-1. At first I thought it was caused by zswap, but it happens even after disabling zswap.
I set the CT to have 1G of RAM and 256M of swap. I get this inside the CT:
# free -m
total used free shared buff/cache...
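If I understand the cgroup v1 model right (an assumption on my part), the 1G + 256M setting should translate into roughly these two lines in the generated LXC config, where memsw is RAM plus swap:

lxc.cgroup.memory.limit_in_bytes = 1073741824
lxc.cgroup.memory.memsw.limit_in_bytes = 1342177280

So if the second value is set wrongly, the guest would indeed report a different swap size than what was configured.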
I know. It's not Proxmox's fault that LXC does not work properly with zswap (well, it kind of works; I've been running it for months, but it's certainly not stable, as I noticed that a few containers with higher memory usage were hitting OOM more often with zswap enabled).
Maybe this can be reported...
I had the following options on the kernel command line of my Proxmox VE:
zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20 zswap.zpool=z3fold
And I noticed that swap in the LXC guests and overall memory management behaves rather weirdly (guest swap being bigger than what I set, containers using much...
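For anyone comparing notes, the current zswap state can also be read and flipped at runtime through sysfs, which makes it easy to confirm that disabling it really took effect (this is the plain kernel interface, nothing Proxmox-specific):

# show the active zswap parameters (filename:value pairs)
grep -r . /sys/module/zswap/parameters/
# disable zswap on the fly; pages already in the pool stay until faulted back in
echo 0 > /sys/module/zswap/parameters/enabled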
When I click on Datacenter -> Search, I see a list of all containers in the network. I can also see the memory and CPU usage of individual containers. I would like to see the swap size and usage there, but there's no such option in the columns menu. Can you please add it?
Also, I guess it might be useful to...
I think the way LXC handles swap is stupid. You swap to a slow hard drive even when you have plenty of fast RAM available on the guest. This eats up precious IO (and basically kills database performance for all containers). OpenVZ does it the opposite way. They have vSwap, which is rate...
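As a stopgap, I would assume the per-cgroup swappiness knob (cgroup v1) can at least make a CT prefer keeping its pages in RAM; something like the line below in the container's LXC config, though I haven't checked how Proxmox itself interacts with it:

lxc.cgroup.memory.swappiness = 0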
Yes, it's overwritten with each upgrade, since these files are part of the Proxmox VE code itself. It sucks big time, since the autodetection is too paranoid.
What's the point? The user of the CT will see fewer CPUs, but with a higher load. Unfortunately, loadavg and CPU load are not overridden by lxcfs right now.
My users prefer CTs that are not overloaded (and report lower load). Not seeing the proper core count is more of a cosmetic problem. Doesn't really make sense...