LXC + zSwap = possible troubles

harvie

Well-Known Member
Apr 5, 2017
I had the following options on the kernel command line of my Proxmox VE host:

zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20 zswap.zpool=z3fold
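(If you want to double-check that the kernel actually picked these up at boot, the live values are exposed through sysfs; a minimal sketch that is safe to run on any machine, since it just reports if zswap isn't there:)

```shell
# Print the live zswap parameters, if the kernel exposes them;
# on a kernel without zswap the glob matches nothing and we only report that.
found=0
for p in /sys/module/zswap/parameters/*; do
    [ -r "$p" ] || continue
    printf '%s=%s\n' "${p##*/}" "$(cat "$p")"
    found=1
done
[ "$found" -eq 1 ] || echo "zswap parameters not exposed on this kernel"
```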

And I noticed that swap in LXC guests and overall memory management behaved rather weirdly (guest swap appearing bigger than I had set, containers using much more than 100% of their assigned swap). So I disabled it again; I hope these problems will be fixed by that.

E.g.:

[screenshot: a container's swap usage reported well above 100% of its assigned limit]

I think zSwap is cool, but it changes the behaviour of swap and memory, and so does LXC, so there's probably some clash between these technologies right now, as they weren't made to cooperate.

Not sure if this will get fixed, but I just wanted to warn you about this.
 
After a quick search, it seems you're the first guy on the whole internet who has tried LXC with zswap :)

As the LXC toolkit doesn't support zswap, I'm not sure if we're going to support that.
 
I know. It's not Proxmox's fault that LXC does not work properly with zSwap (well, it kind of works, I've been running it for months, but it's certainly not stable, as I noticed that a few containers with higher memory usage were hitting OOM more often with zSwap enabled).

Maybe this can be reported upstream, as both LXC and zSwap are mainline kernel features, but I'm not really sure I have the time to do all the debugging.

There's also a theory that LXC in fact plays nicely with zSwap, and the only reasons I've been running into problems are these:
- CTs were showing 130% swap usage when they actually only used 100%, but fitted more into swap due to the compression
- CTs were running into OOM more often because they didn't have enough RAM, and enabling zSwap caused some of it to be occupied by the zSwap pool, so they were forced to swap, which resulted in even more swap usage.
Not sure if this is what happened. Maybe the problems are more complex.

I guess a workaround might be to use ZRAM (formerly CompCache) instead of zSwap, since it does not interfere with memory and swap allocation. It has some downsides compared to zSwap, but it's more foolproof, as it only creates a compressed block device (/dev/zram0) in RAM, which can later be used as swap (mkswap, swapon). (Kind of like tmpfs, but a block device rather than a filesystem.) This means the compression will be done on the host side and will not mess with guest memory accounting. So I think this will work without messing up LXC.
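For the record, the zram setup described above looks roughly like this. This is a hypothetical sketch, not a tested Proxmox recipe: the 1 GiB size, the lz4 compressor, and the swap priority are all example values, and the script bails out politely if it isn't run as root on a kernel with the zram module:

```shell
# Hypothetical recipe: turn /dev/zram0 into a 1 GiB compressed swap device.
# Needs root and the zram module; otherwise we skip, so this is safe to paste.
if [ "$(id -u)" -eq 0 ] && modprobe zram 2>/dev/null; then
    echo lz4 > /sys/block/zram0/comp_algorithm   # example compressor choice
    echo 1G  > /sys/block/zram0/disksize         # uncompressed capacity (example)
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0                     # prefer zram over disk swap
    zram_ready=yes
else
    echo "skipping: needs root and the zram module"
    zram_ready=no
fi
```

Since the compressed device sits entirely on the host, the guests just see ordinary swap being used on their behalf.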

Anyway, I didn't expect you to fix this. Just wanted to share my findings.
 
- CTs were showing 130% swap usage, when they actually only used 100% but fitted more into swap due to the compression
I'd bet on that. Also note that LXC has nothing to do with the way swap or zswap is implemented in the kernel, and no:
both LXC and zSwap are mainline kernel features
LXC is not a kernel feature. All LXC does is configure the cgroup limits, which the kernel then uses to make decisions. If the kernel's implementation of swap limits means the amount of swap after compression, then there's nothing LXC can do about it.
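To illustrate the point about cgroup limits: the numbers the kernel actually enforces can be read straight out of the cgroup filesystem. A minimal sketch, assuming a cgroup-v1 host and LXC's usual `lxc/<CTID>` naming (the CT ID 100 is just an example; the script reports if no such cgroup exists):

```shell
# Hypothetical: inspect the raw cgroup-v1 memory+swap accounting for a
# container. "lxc/100" is an example path; adjust the CT ID to your setup.
CG=/sys/fs/cgroup/memory/lxc/100
if [ -d "$CG" ]; then
    echo "RAM+swap limit: $(cat "$CG/memory.memsw.limit_in_bytes")"
    echo "RAM+swap usage: $(cat "$CG/memory.memsw.usage_in_bytes")"
    cg_found=yes
else
    echo "no such cgroup here (different CT ID, or a cgroup-v2 host)"
    cg_found=no
fi
```

Whatever those files say is what the container's tooling reports, compressed or not.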

(Of course, if there's a way to query the usage both before and after compression, then support for that could be added one way or another, but I haven't taken a closer look at zswap yet, so I don't know.)
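For what it's worth, zswap does expose both sides of the compression through debugfs: `stored_pages` counts the uncompressed pages it holds, and `pool_total_size` is the compressed pool in bytes. A small sketch, assuming root and a mounted debugfs (it falls back gracefully otherwise):

```shell
# Read zswap's before/after-compression numbers from debugfs.
Z=/sys/kernel/debug/zswap
if [ -r "$Z/stored_pages" ] && [ -r "$Z/pool_total_size" ]; then
    uncompressed=$(( $(cat "$Z/stored_pages") * $(getconf PAGESIZE) ))
    compressed=$(cat "$Z/pool_total_size")
    echo "zswap holds $uncompressed bytes squeezed into $compressed bytes"
    zswap_stats=yes
else
    echo "zswap debugfs stats not readable here (needs root + debugfs)"
    zswap_stats=no
fi
```

These are host-global counters, though, not per-cgroup, which may be exactly why wiring them into per-container accounting isn't straightforward.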
 
