Daily backup tasks spike the Linux RAM cache, which never gets freed up and pushes into swap

I just learned about the smem utility, which you can install on your PVE host to check how much swap each process is using. Using that utility, I found out that one of my guests - a Steam/ARK server - used approx. 3 GB of swap, which the PVE host obviously has to provide. Maybe this utility will help you as well in chasing down the swap usage on your PVE.
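For example, something along these lines should list the biggest swap users first (apt install smem; exact column names may differ between versions):

smem -k -s swap -r | head -n 15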
 
Alright, so I want to clarify: I can totally appreciate the scepticism around what I'm saying here. I agree that it is not quite matching expectations. I run a cluster where two of the nodes exhibit this behaviour, and one doesn't. I've tried to find an explanation, and that's part of why I'm posting here in the first place, because I can't yet explain it. All three nodes do backup tasks daily, yet two show this problematic behaviour and one does not.

As for monitoring insights, here is the biggest node that is exhibiting the issue: https://imgur.com/a/wQSOU7I

I want to add that all the VMs that run on that host have a static amount of RAM set. There is no ballooning going on, and I have not modified the proxmox install to run anything beyond snmpd on each node (for monitoring obviously). But snmpd is even running on the node that does not exhibit this behaviour.

This is the total RAM usage; I was unable to separate out caching. Swapping starts when the RAM is full, naturally, so I didn't include the swap graph (I only have a graph for swap usage in bytes, not swap in/out insights). Including that graph would be redundant, as we can safely assume swapping happens once RAM sits at 100% usage for an extended period of time (as the image shows, several days).

As you can see, each day the backup happens, the RAM usage jumps up a good chunk, and the next day none of that is freed up. So the next backup task happens and it jumps up again, nothing getting freed.

The big drop on the right side of the image is me telling the system to flush caches and swap manually. I have not observed it flushing either automatically.

As I mentioned earlier, up until today the only reliable way I have found to address this is to periodically flush the cache and swap manually. However, there is a lot of criticism about this method online, and I've been trying to find a better way to address this issue. I'd rather solve the root cause, but backups seem to be that cause, and I am not sure how I can adjust the backups to fix it.
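For reference, the manual flush being discussed is roughly the usual drop_caches/swapoff dance, something like this as root (not necessarily the exact commands I run):

sync
echo 3 > /proc/sys/vm/drop_caches   # drop page cache plus dentries/inodes
swapoff -a && swapon -a             # force everything in swap back into RAM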

Right now that cluster node has 12 VMs on it, and the RAM allocated for them is ~52GB out of the 128GB of RAM in the physical host node itself.

Also, ignore the date filter at the top of the image; it didn't adjust as I zoomed in, so it is not accurate. The days along the bottom of the image are accurate.
 
Hi @BloddyIron,

As you can see, each day the backup happens, the RAM usage jumps up a good chunk, and the next day none of that is freed up. So the next backup task happens and it jumps up again, nothing getting freed.

As we discussed already, cache is never flushed, and that is a good thing. A server with 100% RAM usage is the desired, normal case. The misconception comes from the Windows world, where cache is always regarded as free, in contrast to Linux, where it is counted as used. This misconception is so widespread that it has its own website.
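You can see that split on any Linux box with free; the number that matters is "available", not "free":

free -h
# columns: total, used, free, shared, buff/cache, available
# "available" estimates how much can be handed to applications without swapping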

The big drop on the right side of the image is me telling the system to flush caches and swap manually. I have not observed it flushing either automatically.

As I mentioned earlier, up until today the only reliable way I have found to address this is to periodically flush the cache and swap manually. However, there is a lot of criticism about this method online, and I've been trying to find a better way to address this issue.

I've never flushed caches manually in my life, and as you already pointed out, admins go nuts when someone talks about it, and so would I. It's useless.
It seems to solve the problem you see, but that does not mean it actually does.

Thank you for the metrics, but they are not sufficient. You need a graph that monitors the actual memory usage, split into used, cached, and free. Again, on all my servers, if I only monitor the used value, I am always at nearly 100%; that's normal. I can recommend Telegraf for acquiring metrics.
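A minimal Telegraf sketch for that would be something like this (the output section is only an example endpoint; point it at whatever backend you actually use):

# /etc/telegraf/telegraf.conf (sketch)
[[inputs.mem]]    # used, cached, buffered, free, available
[[inputs.swap]]   # swap used/free plus swap in/out counters

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]   # example endpoint
  database = "telegraf"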

Have you tried playing around with the backup settings? If you use e.g. a parallelized compression algorithm, you often use more RAM. Please also check the swappiness value on all nodes (cat /proc/sys/vm/swappiness). What about the backup volume on your nodes? As you wrote, you don't experience this problem on all nodes, so is the backup volume different?
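For example, a quick check across the cluster (node names below are placeholders):

for n in node1 node2 node3; do
    echo -n "$n: "; ssh root@"$n" cat /proc/sys/vm/swappiness
done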
 
My issue isn't inherently with the caching, it is with swapping. Swapping on these nodes doesn't happen until the RAM is full (with cache and application data, etc). This is why I flush the cache, as it "buys more time" before swapping occurs. If this caching happened without any swapping, I really wouldn't care.

I'd also like to add: if the first backup wave adds a certain amount of data to RAM as cache, why does it need to add a nearly identical amount of data to RAM again the next day? It doesn't quite add up to me.

In regards to swappiness, all of my active nodes have a value of 60, so I would not say that inherently explains this. I've also adjusted this value on other systems I've observed swapping on, and it only reduces the rate; it does not eliminate it.
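For reference, the change I make on those other systems is just the standard sysctl route, something like:

sysctl vm.swappiness=10                         # takes effect immediately
echo "vm.swappiness = 10" >> /etc/sysctl.conf   # persists across reboots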

In regards to the backup task: it is set to use GZIP, and I'm not seeing any parallel settings, so I'm unsure what you're referring to there. The backup storage is on an NFS mount from a NAS, not local. All nodes back up to this NFS mount.
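For what it's worth, the only parallel option I can find for gzip is pigz via /etc/vzdump.conf; my understanding of the setting (untested here) is:

# /etc/vzdump.conf
pigz: 1   # >0 switches vzdump from gzip to pigz; 1 means use half the cores (pigz must be installed)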

In regards to getting more metrics, I'm not sure when I'll be able to get that.
 
Thanks to my alerting failing to email me (an unrelated issue), I have been able to observe multiple nodes continue to follow this growth pattern without stopping. The swap grows every time a backup happens, there is nothing else going on, and this happens every day (as my backups are daily) once the cache has filled all available RAM.

I really don't think I need to provide more evidence, as the pattern is obvious. And at this point, I really don't think I'm being taken seriously here or elsewhere. I've even dropped swappiness to 0 on the nodes with the issue, with no change in behaviour.

As such, I'm just going to institute a daily cron at like 5 am to flush cache and swap, since I have yet to actually be shown any real solution otherwise, or really feel like I'm being taken seriously on this matter. This is a problem, because I will eventually run out of swap, and that's in addition to the already outlined issues of running in swap when it is completely unnecessary.
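The cron entry itself would be nothing fancy, roughly the same flush commands mentioned earlier wrapped in a cron.d file (exact time and drop_caches level may vary):

# /etc/cron.d/flush-cache-swap (sketch)
0 5 * * * root sync && echo 3 > /proc/sys/vm/drop_caches && swapoff -a && swapon -a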

I hope that the backup process is reviewed, because it is clearly the only thing that is causing this. Again, there is no other scheduled task running on these nodes at these identical daily times that could be causing this. IT IS THE BACKUPS. And I'm most certainly not turning them off.

I can appreciate the challenge of reproducing this, but I hope those reading can appreciate that I need to take action, as I really feel I have no other option at this point.
 
Do you have evidence of

* swap eventually running out of space? (If yes, then there is a memory leak somewhere. This must be investigated.)
* swap-in activity? (If there is no swap-in, it means the swapped-out code/data has not been needed since it was moved out of RAM.)

Swapping out unneeded code and data is not a problem. (It makes room for something else.)
Swapping stuff back in from the swap file IS INDEED BAD.

Assuming swap is not running out of space and no swap-in occurs, the only "bad" thing in your system IS YOUR CRON JOB, as it just reloads from swap the "unused stuff" that was moved out of RAM to make room for something "more useful".


Please monitor the swap-in counter; this is "THE ONE".

(The vmstat command might be your friend.)
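Concretely, something like:

vmstat 5
# watch the si (swap in) and so (swap out) columns;
# si staying at 0 means nothing is being pulled back in from swap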
 
Do you really want me to run my system to 0% free swap and 0% free RAM to prove this is a problem? Because I'm not going to do that. Both of the problematic systems have continued to use more and more swap each day; one of them is at 50% swap usage.

The evidence I've presented is conclusive enough to clearly identify that the only impactful task is backups, and that the growth is continual and _NEVER_ shrinks. No RAM is freed up (from caching) and no swap is freed up, ever. So if you still somehow believe this will just go away, then I'm done. I've said enough.
 
cat /proc/$PID/status | grep VmSwap will give you the amount of swap owned by the process with PID = $PID.

Knowing the "swap eater" process might be useful to help you ...
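A quick and dirty sketch to rank every process by VmSwap (values are in kB; kernel threads without the field simply fall to the bottom):

for f in /proc/[0-9]*/status; do
    awk '/^(Name|Pid|VmSwap):/ {printf "%s ", $2} END {print ""}' "$f" 2>/dev/null
done | sort -k3 -rn | head -n 15    # output: name pid swap_kB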
 
Hi!

I haven't seen any solution to this problem in this thread (it's quite old as well), and I'm not sure if you managed to solve it after all, but I would love to share our solution for this exact same issue.
We had a completely new, relatively unconfigured backup system, and all we had to do was exclude the syslog and lastlog files under /var.
After this change was implemented, the buff/cache wasn't spiking and swap was not used.
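In case it helps someone: if the backup in question is vzdump backing up containers, my understanding is the same exclusion can be done with --exclude-path (the VMID and globs below are just placeholders; block-level VM backups don't support path exclusions):

vzdump 101 --exclude-path /var/log/lastlog --exclude-path '/var/log/syslog*'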
 
