I've just found this: https://pwning.systems/posts/escaping-containers-for-fun/
They simply set /proc/sys/kernel/core_pattern to execute a user-provided binary in the host context by triggering a coredump inside a privileged Docker container.
Can this be done with privileged CTs on Proxmox? Or is...
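For reference, the gist of the technique as I understand the write-up is only a couple of lines — a rough sketch with an illustrative handler path (the pattern is resolved by the kernel on the host, so the handler has to sit at a path that also exists in the host's filesystem):

# inside the privileged container, which sees the host's /proc/sys:
echo '|/path/to/handler %p' > /proc/sys/kernel/core_pattern
# trigger a coredump; the kernel then runs the handler as root on the host
ulimit -c unlimited
sleep 100 &
kill -SEGV $!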
https://discuss.linuxcontainers.org/t/lxd-4-20-has-been-released/12540
LXD now has live migration. Perhaps the CRIUgenic technology has advanced a bit recently and Proxmox can start looking into this as well?
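For context, plain LXC already exposes the CRIU plumbing through lxc-checkpoint, so I imagine the low-level flow would look roughly like the sketch below (the container name and directory are purely illustrative, and criu has to be installed on both nodes):

# dump the running container's state with CRIU and stop it (-s)
lxc-checkpoint -n 101 -D /tmp/checkpoint -s -v
# ...transfer /tmp/checkpoint to the target node (rsync/scp)...
# restore the container from the dump on the target (-r)
lxc-checkpoint -n 101 -D /tmp/checkpoint -r -v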
Syncthing is really cool, but I don't think this use case is currently supported by Syncthing.
These are my concerns:
1.) File permissions, extended attributes and other advanced metadata might not fully sync
2.) Syncthing can only write files under a single user/owner.
3.) Syncing database...
There is still some discussion about mainlining this: https://github.com/dolohow/uksm/issues/41#issuecomment-926282376
I think this might need a full-time developer for one or two months to get it upstream. But it still might be well worth it for all the large-scale PVE/LXC deployments out there...
Even with cgroupv2 enabled?
This swap=mem+swap thing has been messing with my setups for years... And this whole time I have had a very hard time defending this behaviour in our company. I like Proxmox very much, but people keep pushing Hyper-V and I will probably die inside a little bit if I will...
Recently PVE 6.4 was released with improved cgroupv2 support; I wonder if that means the swap limit now works properly and independently of the RAM limit.
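For anyone else wondering where the behaviour comes from: as far as I understand, under cgroup v1 the memory controller only has a combined mem+swap limit, while cgroup v2 exposes a separate swap limit, so the two can finally be set independently. A quick illustration (the lxc/<CTID> paths are just illustrative, the exact hierarchy may differ):

# cgroup v1: the only swap-related knob is memory+swap combined
cat /sys/fs/cgroup/memory/lxc/<CTID>/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/<CTID>/memory.memsw.limit_in_bytes
# cgroup v2: memory.max and memory.swap.max are independent limits
cat /sys/fs/cgroup/lxc/<CTID>/memory.max
cat /sys/fs/cgroup/lxc/<CTID>/memory.swap.max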
Thank you for the great work, guys, I really hope you will keep it up!
The changelog mentions "Improved cgroup v2 (control group) handling." Can I ask what exactly was improved? Would it be safe to start phasing this into the production servers now?
OK, I've created a ticket in Bugzilla: https://bugzilla.proxmox.com/show_bug.cgi?id=3397
BTW, I can't really think of any downsides/drawbacks of using this for LXC. Do you know of any?
https://kernelnewbies.org/Linux_5.12#ID_mapping_in_mounts
They just released Linux 5.12, which can remap UIDs/GIDs of mountpoints.
This is an absolutely awesome feature which would mean that we don't really need to backup/restore or otherwise convert a CT's filesystem when switching containers...
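To illustrate why this matters: today an unprivileged CT is defined by a static shift like the one below, and the files on disk are literally chowned into that range, which is why converting between privileged and unprivileged currently means rewriting the ownership of the whole filesystem. With an idmapped mount the kernel could apply the same shift at mount time instead (the 100000 base is just the common default range, shown for illustration):

# typical unprivileged mapping in an LXC config: container uid/gid 0-65535
# are backed by host ids 100000-165535
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536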
Any news on this? It seems there has been very interesting progress now that LXC 4.0 is implemented in Proxmox.
Do you think it is now safe to boot into cgroupv2 mode in production (given that I run reasonably recent guest distros in CTs)? I really need to be able to limit the swap per CT (or...
According to some docs, swappiness cannot completely prevent a cgroup from swapping when the whole system is starved of RAM.
But hey, it still might be an interesting idea to set a lower swappiness for mission-critical containers, so they stay in RAM during the peak hours, while the...
You can check that the swappiness was applied to an individual CT with the following command:
cat /sys/fs/cgroup/memory/lxc/<CTID>/memory.swappiness
But I let it run overnight and according to the Proxmox web UI the CT has swapped 100MB, so I am not sure if anything really changed...
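If you want to cross-check what the web UI reports, the cgroup v1 memory controller also exposes the swapped-out bytes directly (in the same hierarchy as the swappiness file above, and only when swap accounting is enabled):

grep -E '^(swap|total_swap) ' /sys/fs/cgroup/memory/lxc/<CTID>/memory.stat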
You can probably even use the following line in /etc/lxc/default.conf:
lxc.cgroup.memory.swappiness = 0
But I am not sure... This does not seem to be applied to Proxmox CTs.
OK, so I've added the following line to the /etc/pve/lxc/XXX.conf file:
lxc.cgroup.memory.swappiness: 0
And it seems to be working. Hopefully the swap is now disabled for that CT...
ZFS heavily trades RAM consumption for IO performance by caching aggressively (the ARC). So if you have enough RAM, your IO will be even faster than before. If you don't have enough RAM, the IO will be slow and the RAM occupied anyway.
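If the RAM consumption is a problem, the usual knob is to cap the ARC size via the zfs kernel module parameter; a quick sketch, with 4 GiB just as an example value:

# /etc/modprobe.d/zfs.conf - limit the ARC to 4 GiB (value in bytes)
options zfs zfs_arc_max=4294967296
# apply immediately without a reboot:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max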