[SOLVED] PVE Container Memory Management

Jay L

Active Member
Jan 27, 2018
Hi,

I am new to Proxmox and am using it on a small homelab system. It is running on a fanless mini PC with only 8GB of RAM. I am a Linux user and so I am currently hosting virtual Linux environments running in containers.

I am trying to understand how memory is managed. Let's assume I have four containers and assign 3GB of memory to each. That would equate to 12GB total, which exceeds the 8GB physically available. My machines are inactive 99% of the time, so it is fair to assume that all the containers will not be hammering the system at the same time. This brings me to my questions:
  1. Is over provisioning RAM in Proxmox containers a bad idea? If so why?
  2. If I do over provision, will RAM be dynamically allocated? (e.g. physical RAM is taken from inactive machines and consumed by active ones as needed?)
  3. What are best practices for RAM allocation in bursty environments?
Just to be clear, I am not having issues but want to be sure that I understand best practices.

Thank you in advance!
 
In general, over-provisioning is no problem at all, as long as everything fits into memory plus swap. If you use more RAM than you physically have, infrequently used pages are swapped out to disk; unless they are needed often, this poses no problem. But once you cross that barrier, you will hit a tremendous performance bottleneck. You should monitor your system (load, memory) and probably also swap-in/swap-out to get a glimpse of what's going on. If you hit your performance-impact point, you should consider buying more RAM for the machine.
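To put numbers on that advice, current RAM and swap usage can be read straight from /proc/meminfo on the PVE host. A minimal sketch (`vmstat` then gives the live swap-in/swap-out rates mentioned above):

```shell
#!/bin/sh
# Report physical RAM and swap totals/availability on the host.
# Values come from /proc/meminfo, so this works on any Linux system.
awk '/^(MemTotal|MemAvailable|SwapTotal|SwapFree):/ { print $1, $2, $3 }' /proc/meminfo

# For live swap-in (si) / swap-out (so) rates, sample with:
#   vmstat 1 5
# Sustained non-zero si/so columns mean the host is actively paging.
```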

This is true for LX(C) containers as well as for KVM VMs. In the latter case, you also have kernel samepage merging (KSM), which reduces overall RAM usage by deduplicating identical pages. As far as I understand it (so I could be wrong about this), KSM only works within the same memory namespace, so it does not work for LXC.
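Whether KSM is actually merging anything can be checked from sysfs on the host. A sketch; the counters only exist on kernels built with KSM support:

```shell
#!/bin/sh
# Print KSM status and page-sharing counters if the kernel exposes them.
ksm_dir=/sys/kernel/mm/ksm
if [ -d "$ksm_dir" ]; then
    # run=1 means the ksmd scanner is active; pages_sharing > 0
    # means deduplication is saving memory right now.
    for f in run pages_shared pages_sharing; do
        if [ -r "$ksm_dir/$f" ]; then
            echo "$f: $(cat "$ksm_dir/$f")"
        fi
    done
else
    echo "KSM not available on this kernel"
fi
```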
 
Hi,

I do not know what the best solution is, but I can tell you what I do with VMs. I prefer not to assign any VM more RAM than I physically have. By default I use a small swap partition equal to the RAM allocated to the VM. This swap is fast (SSD). I also have a script that checks when the fast swap reaches 80% usage; when that limit is hit, it activates a very slow swap (swap on /path-to-a-file) located inside the same VM.
This works for me, so I cannot say it must work for someone else.
But as our guru @LnxBil said, running with insufficient RAM over the long term is not a smart idea, and RAM prices are not a big problem even on a low budget .... In my case I fall back to the slow swap about once every 3-4 months.
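The approach described above can be sketched roughly like this. Note this is a hypothetical reconstruction, not the poster's actual script; the swap-file path and the 80% threshold are illustrative:

```shell
#!/bin/sh
# When the fast SSD swap passes a usage threshold, bring a slower
# file-backed swap online as a second tier.
THRESHOLD=80
SLOW_SWAP=/var/slow-swapfile   # pre-created with fallocate + mkswap (illustrative path)

# Percentage of swap currently in use, from /proc/meminfo.
used_pct=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} \
    END { if (t > 0) printf "%d", (t - f) * 100 / t; else print 0 }' /proc/meminfo)

if [ "$used_pct" -ge "$THRESHOLD" ]; then
    # Only activate it if it is not already listed as active swap.
    grep -q "$SLOW_SWAP" /proc/swaps || swapon "$SLOW_SWAP"
fi
```

Run it from cron every minute or so; deactivating the slow tier again with `swapoff` once pressure drops is left out for brevity.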
 

Good idea.

I often use a similar approach: zram, which is compressed swap in RAM as the first tier. With this technique you can have (as silly as it sounds) swap in RAM, which is standard on a lot of ARM- and Linux-based boxes to increase their performance. In the VMs themselves, I always use only zram. This is the only way to have compressed RAM until it is generally available in the kernel.
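For reference, a zram swap tier can be set up by hand roughly like this. A sketch only: the 2G size, lz4 compressor, and priority are illustrative choices, and loading the module requires root on a host with zram available:

```shell
#!/bin/sh
# Sketch: create a compressed swap device in RAM with zram.
if modprobe zram 2>/dev/null; then
    echo lz4 > /sys/block/zram0/comp_algorithm   # fast compression
    echo 2G  > /sys/block/zram0/disksize         # uncompressed capacity
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0   # high priority: used before any disk swap
else
    # Inside a container (or without root) the module cannot be loaded.
    echo "zram module not available here"
fi
```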
 
Hello Bill,
From what I have experienced, zram does not work inside an LXC container, only on the PVE host or in a KVM VM.
Do you agree?
 
Sure, but you can enable it on the host and use it transparently inside your container.
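One way to see this from inside a container (a sketch; note that lxcfs may present a virtualized view of some /proc files depending on version):

```shell
#!/bin/sh
# Inside an LXC container the kernel and its swap devices are shared
# with the host, so host-side zram swap benefits the container too.
cat /proc/swaps
# The container cannot load kernel modules itself, which is why
# running a zram init script inside it fails.
```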
 
We had an issue with zram memory on LXC. From syslog:
Code:
Feb 15 13:42:53 etherpad init-zram-swapping[119]: libkmod: ERROR ../libkmod/libkmod.c:586 kmod_search_moddep: could not open moddep     
file '/lib/modules/4.13.13-5-pve/modules.dep.bin'                                                                                       
Feb 15 13:42:53 etherpad init-zram-swapping[119]: modinfo: ERROR: Module alias zram not found.                                         
Feb 15 13:42:53 etherpad init-zram-swapping[119]: libkmod: ERROR ../libkmod/libkmod.c:586 kmod_search_moddep: could not open moddep     
file '/lib/modules/4.13.13-5-pve/modules.dep.bin'                                                                                       
Feb 15 13:42:53 etherpad init-zram-swapping[119]: modinfo: ERROR: Module alias zram not found.

I keep a zram .deb around to install on PVE; it was an old pre-systemd build. Rebuilding the .deb from the https://github.com/Nefelim4ag/systemd-swap code solved our issue.

Code:
git clone https://github.com/Nefelim4ag/systemd-swap.git
apt install make
cd systemd-swap/
./package.sh debian
dpkg -i systemd-swap_4.0.1_any.deb
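After installing the package, zram is turned on through the shipped config file and the unit is enabled. A sketch; the exact config keys vary between systemd-swap versions, so check the comments inside the shipped swap.conf:

```shell
#!/bin/sh
# Enable and start systemd-swap after editing its config
# (e.g. switching the zram option on in /etc/systemd/swap.conf).
command -v systemctl >/dev/null || { echo "systemd not present"; exit 0; }
systemctl enable --now systemd-swap
systemctl status systemd-swap --no-pager
```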
 

Hi Jay,

Interested to know which fanless PC you are using?
 
Hi,

So I have a couple. I am currently running Proxmox on this one with 8/256 (RAM/storage in GB), and it works really well. In my experience the real limiting factor is the RAM, as I am barely touching the storage. I am running Linux containers (six at the moment), which are very efficient from a storage and CPU standpoint, and I have had no issues at all. Anecdotally, LXC seems very memory efficient, and I have no problems oversubscribing memory, especially since none of my containers are memory intensive.

I will be installing Proxmox on this one as well:

https://www.amazon.com/gp/product/B01JOR1IN2/ref=oh_aui_detailpage_o09_s00?ie=UTF8&psc=1

Your mileage will vary depending on your use case, but for me with multiple light use Linux VMs, it works perfectly.

On a side note, I highly recommend using InfluxDB and Grafana to track metrics. It is a perfect way to track utilization, and it runs in a container.
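For anyone wanting the same setup: PVE can push its metrics to InfluxDB natively via the external metric server config. A sketch; the server address is a placeholder and the file format should be checked against the pve-manager documentation for your version (PVE of this era sends to InfluxDB over UDP, commonly on port 8089):

```
# /etc/pve/status.cfg
influxdb:
	server 192.168.1.50
	port 8089
```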

Edits: Fixed the PC link and clarified the specs