After updating to Debian 13 / PVE 9.x, I started seeing a lot of OOM errors in my LXC containers and found several of them stuck at 99% CPU and 99% memory usage.
Debian 13 has, by default, moved /tmp to a tmpfs.
The tmpfs incorrectly sizes itself against the host's memory capacity (64GB in my case) rather than the container's memory limit; tmpfs defaults to size=50%, so the LXC thinks it can use half of the host's memory for /tmp.
With the container memory limit set to 1GB, any attempt to store more than 1GB in /tmp is refused, the CPU spikes to 99%, and memory usage is stuck at 99%.
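To see where that half-of-host figure comes from, the default can be reproduced on the PVE host: the kernel resolves tmpfs's size=50% against the host's MemTotal, not the container's cgroup limit. A minimal sketch (the 64GB figure is just my host's value):

```shell
# On the PVE host: tmpfs's default size is 50% of the kernel's total RAM
# (size=50% is resolved against host memory, not the container limit).
host_mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "host MemTotal:      $((host_mem_kb / 1024)) MiB"
echo "default tmpfs size: $((host_mem_kb / 2 / 1024)) MiB"
# Inside the container, compare with what /tmp actually reports:
#   df -h /tmp
```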
"You can return to /tmp being a regular directory by running `systemctl mask tmp.mount` as root and rebooting."
https://www.debian.org/releases/tri...-files-directory-tmp-is-now-stored-in-a-tmpfs
https://manpages.debian.org/trixie/manpages/tmpfs.5.en.html
Please let me know if you think I've missed something.
UPDATE:
https://bugzilla.proxmox.com/show_bug.cgi?id=6167
Also found an older post where someone was experiencing tmpfs being sized from the node/host kernel's RAM rather than from the LXC memory limit.
This issue has caused a number of memory limit problems in my LXC containers.
Hello All!
I have been experiencing LXC containers with small memory allocations stopping randomly, dying silently without any trace of the cause. Watching over time, I found that the most probable reason is out-of-memory. Allocating more memory lets them run longer, but they fail anyway. When I tried to find out where the memory was going, I found that "shared" grows and grows until the OOM condition is reached.
The result of my investigation is very curious; follow the sequence:
1. When the container starts, it has a tmpfs mounted at /run, and the size of that tmpfs is set to half of the physical server's physical memory... - Alexey Pavlyuts
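The quoted observation is easy to check from inside a container with standard tools; a sketch (the size=32941876k value below is a made-up example of what a 64GB host produces):

```shell
# Inside an affected container: list every tmpfs mount and its size.
findmnt -t tmpfs -o TARGET,SIZE,OPTIONS
df -h /run /tmp
# The size= mount option can be pulled out of the options string, e.g.:
opts='rw,nosuid,nodev,size=32941876k,nr_inodes=819200,mode=755'
echo "$opts" | tr ',' '\n' | sed -n 's/^size=//p'   # -> 32941876k
```

If that size tracks the host's RAM instead of the container limit, you are hitting the same behaviour.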
(From the thread "journal lxc tmpfs", 3 replies, in the Proxmox VE: Installation and configuration forum.)