I'm having problems limiting memory in an LXC container and would appreciate some help please.
I have created a CentOS 7 Container with the following settings:
Code:
arch: amd64
cores: 2
cpuunits: 10
hostname: [redacted]
memory: 512
mp0: /backup2/pm2/dump,mp=/mnt/pm2dump,ro=1
net0: name=[redacted]
ostype: centos
rootfs: backup2:555/vm-555-disk-0.raw,size=8G
swap: 512
unprivileged: 1
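For reference, my understanding is that these limits end up as cgroup values on the Node. On my setup they look sane when I check them like this (the cgroup v1 paths are an assumption on my part and may differ on newer Proxmox versions):
Code:
# On the Node - limits that Proxmox appears to have applied for CT 555
# (cgroup v1 layout assumed; memsw = memory + swap, needs swap accounting enabled)
pct config 555
cat /sys/fs/cgroup/memory/lxc/555/memory.limit_in_bytes        # expecting ~512 MB
cat /sys/fs/cgroup/memory/lxc/555/memory.memsw.limit_in_bytes  # expecting ~1 GB (memory + swap)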
I'm using it purely to run the AWS CLI so that I can back up the contents of /backup2/pm2 to S3.
The problem I'm seeing is that when I use "aws s3 cp" or a similar command to copy a file or directory, the Node's swap usage (8 GB total) climbs from almost nothing to 100% over roughly 10 minutes during the S3 upload.
Meanwhile, the Node's physical memory usage does not change; there is plenty of headroom and it is nowhere near exhausted.
Within the Container, "top" and "ps" show the aws process using around 90% CPU and a reasonable amount of memory, typically 20%.
And yet it is somehow consuming swap on the Node as if there were no limit on it at all.
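For completeness, this is roughly how I'm reproducing it and watching the swap climb (the bucket name is just a placeholder):
Code:
# Inside the Container - upload the mounted dump directory to S3
aws s3 cp /mnt/pm2dump s3://<my-bucket>/pm2dump --recursive

# On the Node at the same time - watch swap usage grow
watch -n 10 free -m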
Is there some other memory parameter that I should be setting to limit the Container's impact on the Node? It was my impression that, unlike KVM with ballooning memory, a Container should not be able to exceed the memory or swap limits set in its config file. But this doesn't seem to be the case, at least to this newbie's eyes.
I know "swap" in Linux is more than just "disk based memory". I know it is far more complex. But even so, I can't understand why a Container with a 512M limit on swap could cause the Node's swap to get completely used.
Suggestions, pointers, explanations, etc. would be appreciated!