Cannot create dummyfile in LXC running Alpine Linux

cmonty14

Hello!
I have configured a quite small container for a single service:
arch: amd64
cores: 1
hostname: vm102-haproxy.whl.meilocal.net
memory: 32
net0: name=eth0,bridge=vmbr2,gw=10.0.0.1,hwaddr=D6:B6:21:29:10:E6,ip=10.0.0.2/24,type=veth
onboot: 1
ostype: alpine
rootfs: images:102/vm-102-disk-1.raw,size=307M
startup: order=1,up=5
swap: 16
unprivileged: 1


This LXC was running with Alpine Linux 3.6 w/o issues.

However, I want to upgrade to the latest Alpine Linux 3.7, but the upgrade always fails at around 40-50%.
It does not always fail on the same file, so I assumed the problem is related to write I/O on the local disk.
To verify this I started a dd process to create a dummy file. The file was not written completely; instead the process was killed after writing 29.3MB:
vm102-haproxy:~# dd if=/dev/zero of=/dummyfile bs=4M count=50
Killed
vm102-haproxy:~# ls -lh /
total 30052
drwxr-xr-x 2 root root 4.0K Mar 10 09:44 bin
drwxr-xr-x 4 root root 380 Mar 10 10:16 dev
-rw-r--r-- 1 root root 29.3M Mar 10 10:18 dummyfile
drwxr-xr-x 31 root root 4.0K Mar 10 10:16 etc
-rw-r--r-- 1 root root 0 Mar 10 10:16 fastboot
[...]
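A quick way to verify that dd really was killed by a signal, rather than failing on its own, would be to check the exit status right after the run (a small sketch, assuming the busybox ash shell in the container):

vm102-haproxy:~# echo $?

An exit status of 137, i.e. 128 + 9 (SIGKILL), would point to the OOM killer, since SIGKILL is the signal it sends.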


If you think there's insufficient free space: that would be too simple, as there is still room left on the root filesystem:
vm102-haproxy:~# df -h
Filesystem Size Used Available Use% Mounted on
/dev/loop0 289.3M 211.8M 58.1M 78% /
/dev/loop0 289.3M 211.8M 58.1M 78% /
none 492.0K 0 492.0K 0% /dev
run 7.8G 60.0K 7.8G 0% /run
shm 7.8G 0 7.8G 0% /dev/shm
udev 7.8G 0 7.8G 0% /dev/null
udev 7.8G 0 7.8G 0% /dev/zero
udev 7.8G 0 7.8G 0% /dev/full
udev 7.8G 0 7.8G 0% /dev/urandom
udev 7.8G 0 7.8G 0% /dev/random
udev 7.8G 0 7.8G 0% /dev/tty


I've also resized the disk to 1G after purging the LXC w/o success.

Please advise how to fix this issue.

THX
 
You probably get OOM-killed. Can you check the logs and/or test with more memory assigned to the container?
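For reference, the OOM killer logs to the host's kernel ring buffer, so something along these lines on the Proxmox node should show whether (and why) dd was killed; the grep patterns are only an example:

# run on the Proxmox host, not inside the container
dmesg -T | grep -i -E 'oom|killed process'
# or, via the systemd journal:
journalctl -k | grep -i oom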
 
Hi Fabian,
I do believe that this is an out-of-memory issue, because after increasing the memory to 50MB I could write a ~48MB file with dd.

But this leads to the next question:
How can I run such a small container with this restriction?
I must still be able to update/upgrade the base OS, in this case Alpine Linux.

THX
 
Well, you need to provide enough memory to allow the upgrade to go through.
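A minimal sketch of that "raise for the upgrade, lower again afterwards" approach, assuming VMID 102 from the config above and that /etc/apk/repositories inside the container already points at the v3.7 branch; pct set adjusts the container's memory limit and pct exec runs the commands inside it:

# on the Proxmox host
pct set 102 -memory 128
pct exec 102 -- apk update
pct exec 102 -- apk upgrade
pct set 102 -memory 32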
 
OK, this is the obvious workaround.
But it would mean that I have to increase the memory for the OS update and decrease it again for normal operation once the update completes.
This does not sound very practical.

And there's another question:
dd writes data to a file or device.
When I write to a file under the root path, why does this exhaust memory?
 
Likely because the memory used for buffering/writing is accounted to the container, which in turn hits the very low limit and triggers the OOM killer. The kernel log should contain some more information about memory usage at the time it triggered.
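If you want to watch that accounting while reproducing the problem, the container's memory cgroup on the host exposes the limit, the peak usage and the cache/rss split. A sketch assuming cgroup v1 and the usual Proxmox path for VMID 102 (the exact path may differ between versions):

# on the Proxmox host
cat /sys/fs/cgroup/memory/lxc/102/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/102/memory.max_usage_in_bytes
grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/lxc/102/memory.stat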
 
Well, I will check the kernel log and update this thread.
However, I was not expecting that writing a file to persistent storage would exhaust the available memory through caching.
In that case it should be possible to mount persistent storage into the container w/o caching.
 
It's not necessarily the (file/page) cache - the process that is writing might hold the data in its regular memory as well, especially when we're only talking about a few tens of MB.
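One way to tell the two effects apart would be to repeat the test with a small block size (so dd's own buffer stays well below the limit) and with O_DIRECT, which bypasses the page cache. A sketch, assuming the dd inside the container supports oflag=direct (the GNU version from "apk add coreutils" does; busybox builds may or may not):

vm102-haproxy:~# dd if=/dev/zero of=/dummyfile bs=1M count=40 oflag=direct

If this run survives while the original bs=4M run without oflag=direct gets killed, the page-cache accounting is the main contributor; if it is still killed, the limit is simply too tight for the workload.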
 
Hm... all your responses imply that LXC is not meant to run a microservice with just a few MB of memory.
But that is exactly the point everybody highlights as the advantage of containers. Or does this only apply to Docker?
 
Sorry, but if you limit a container to a certain amount of memory, you can of course only run stuff that does not require more memory. If I understand you correctly, the service itself seems to run fine?

If upgrading inside your deployed container takes more resources than you allocated to the container, you either need to allocate more resources (at least for the duration of the upgrade), or upgrade by other means (e.g., AFAICT the usual approach would be to redeploy the container from an updated template instead of running an update inside every deployed instance - but I am not a "microservice" expert).

This has nothing to do with LXC in particular whatsoever.
 
First: the "microservice" (HAProxy) is running fine w/o issues.
Second: your proposal to redeploy the container from an updated template would imply reinstalling and reconfiguring the application, HAProxy. In my understanding this is different from Docker, where separate layers allow me to upgrade the OS w/o changing the application.
 
or to integrate those steps into your template generation process (which is what Docker does, the stack is just a bit different), or to use some kind of automatic deploy/config management stack to do it for you on top of an updated base template, or ...
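A rough sketch of that flow with the stock Proxmox tooling, i.e. building a fresh container from an updated template and pushing the existing configuration into it. The new VMID 103, the IP and the template file name are placeholders, and "local"/"images" are the storage names from the config above:

# on the Proxmox host
pveam update
pveam available --section system          # look up the current Alpine 3.7 template name
pveam download local alpine-3.7-default_<date>_amd64.tar.xz
pct create 103 local:vztmpl/alpine-3.7-default_<date>_amd64.tar.xz \
    -hostname vm103-haproxy -memory 32 -swap 16 -unprivileged 1 \
    -net0 name=eth0,bridge=vmbr2,gw=10.0.0.1,ip=10.0.0.3/24,type=veth \
    -rootfs images:1
pct start 103
pct exec 103 -- apk add haproxy
pct push 103 /root/haproxy.cfg /etc/haproxy/haproxy.cfg
pct exec 103 -- rc-update add haproxy
pct exec 103 -- rc-service haproxy start

Once the new instance is verified, the old container can be stopped and destroyed, or kept as a fallback.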
 
