LXC container using more than 100% SWAP

mailinglists

Renowned Member
Mar 14, 2012
Hi guys,

I have PM 4.3-1/e7cdc165 and a few LXC containers on it. One of the LXC containers started using more than 100% of its SWAP space, according to both the PM web interface and free. Here's how it looks from inside the LXC container. I can even get free to crash:

Code:
[root@www log]# free
              total        used        free      shared  buff/cache   available
Mem:       24576000     8211196    15756116      565840      608688    15756116
Swap:        524288      584968      -60680
[root@www log]# free -m
              total        used        free      shared  buff/cache   available
Mem:          24000        8018       15387         552         594       15387
Swap:           512         571  18014398509481924
[root@www log]# free -h
              total        used        free      shared  buff/cache   available
Mem:            23G        7.8G         15G        552M        594M         15G
Segmentation fault

And from Web interface:
Code:
SWAP usage 111.57% (571.26 MiB of 512.00 MiB)

The root of the LXC container is a ZFS ZVOL.
Swap for the hypervisor (i.e. Proxmox) resides on an ext4 partition.
The LXC container has 512 MB of swap, as per the default GUI creation values.
By the looks of our monitoring system, it happened during the nightly backups.
Aside from free crashing, the system seems to perform normally.
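Presumably the huge value from free -m is just the negative swap-free number (524288 - 584968 = -60680 KiB) wrapping around as an unsigned 64-bit integer before the KiB-to-MiB conversion; at least the arithmetic seems to match:

Code:
# -60680 KiB reinterpreted as unsigned 64-bit, then converted to MiB
$ echo '(2^64 - 60680) / 1024' | bc
18014398509481924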

I guess this is a bug. Should I report it?
Has anyone ever seen anything like this before?
 

If it is still reproducible with the current version, then please report a bug.
 
I guess it will never be reproducible with the current version, because it takes time to show up, and you usually release a new version faster than it shows up. :) After upgrading to 4.4, if it shows up again, I'll report a bug even if 4.5 has been released by then.
 

I asked for this because there were memory-attribution-related bug fixes in both lxcfs and PVE ;) If it is triggered by backups, maybe you can speed up the reproduction by running a couple of backup jobs one after another (maybe dd-ing some random data in between to invalidate caches)?
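Something along these lines should do it (CT ID 100 and the "local" storage are just placeholders, adjust to your setup):

Code:
# run a backup, churn the page cache inside the CT, then back up again
root@pve:~# vzdump 100 --mode snapshot --storage local
root@pve:~# pct exec 100 -- dd if=/dev/urandom of=/root/fill.bin bs=1M count=2048
root@pve:~# vzdump 100 --mode snapshot --storage local
root@pve:~# pct exec 100 -- rm /root/fill.bin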
 
I have the same issue on PVE 5.2-1. At first I thought it was caused by using zswap, but it happens even after disabling zswap.

I set the CT to have 1G of RAM and 256M of swap. I get this inside the CT:



Code:
# free -m
              total        used        free      shared  buff/cache   available
Mem:           1024         361         456        2723         206         662
Swap:          1280         345         934


config as follows:

arch: amd64
cores: 8
cpulimit: 1
hostname: XXXX
memory: 1024
net0: name=eth0,bridge=vmbr0,gw=XXXX,hwaddr=XXXX,ip=XXXX/24,tag=80,type=veth
onboot: 1
ostype: debian
rootfs: vps:subvol-212-disk-1,size=64G
swap: 256


It looks like LXC does not obey the swap settings given by Proxmox. Swap is oversized immediately after CT boot-up.


UPDATE: I've figured out the problem. The actual size of swap is always set to the "RAM + SWAP" specified in Proxmox. Therefore, if you set 1G of RAM and 256M of swap, you will get 1.25G of swap, but Proxmox still thinks it's 256M, so it shows more than 100% usage.
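You can see the same thing on the host by comparing the container's cgroup v1 limits (the path below is what I'd expect for CT 212 on a PVE 5.x host; it may differ on other setups):

Code:
# memory.limit_in_bytes = RAM, memory.memsw.limit_in_bytes = RAM + swap
root@pve:~# cat /sys/fs/cgroup/memory/lxc/212/memory.limit_in_bytes
1073741824
root@pve:~# cat /sys/fs/cgroup/memory/lxc/212/memory.memsw.limit_in_bytes
1342177280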

Please fix this. I want to set swap lower than RAM.
 

This is not really fixable until we have full cgroup v2 support.

The kernel only has a memory and a memory+swap limit right now; the only thing that changed recently is how lxcfs (and thus tools like free, top, ...) "containerizes" the numbers. Previously, you could have a swap usage higher than the swap total (which, depending on the tool, might have been displayed as a huge usage because of wrap-arounds/underflows), which was confusing as well. Now the swap numbers reflect the actual memory+swap usage and limits. Your container does not get more memory; it is just displayed differently. Each byte that is counted as used memory is also counted as used swap, unlike on the host itself, where those are two different "buckets".
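For comparison, cgroup v2 exposes a separate swap limit, so capping swap below RAM would become possible; roughly something like this (illustrative paths for CT 212 on a unified hierarchy, not a current PVE setup):

Code:
# cgroup v2: RAM and swap limits are independent knobs
echo $((1024*1024*1024)) > /sys/fs/cgroup/lxc/212/memory.max       # 1G RAM limit
echo $((256*1024*1024))  > /sys/fs/cgroup/lxc/212/memory.swap.max  # 256M swap limit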
 
I have the same problem, any solution?

 
Same here, and I don't understand why. Running on Virtual Environment 6.1-7.
 

Because Proxmox does not support cgroup v2 yet. Cgroup v2 is needed to fix swap going all medieval in your CTs.
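If you want to check which hierarchy your host is running, this is a quick way (cgroup2fs would indicate the unified cgroup v2 hierarchy; tmpfs means the v1/hybrid layout current PVE releases use):

Code:
root@pve:~# stat -fc %T /sys/fs/cgroup/
tmpfs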
 
