Huge memory usage _after_ backup

Scoffer

New Member
Jun 15, 2020
Hi everybody. I finally had to register to the forum when I ran into a problem I cannot seem to solve. Okay, upgrading to Proxmox VE 6.2 (currently on 6.1-7) might help, but I would first like to know what's causing the problem, since upgrades are always a bit of a hassle.

I have somewhat less than a hundred VMs, and one of them is a web host VM with a decent amount of traffic, small files etc.
I run VM backups once per week: some of the VMs on Saturday night, the rest on Sunday night. The problem occurs only with this one VM.
The VM has 8 GB of RAM, but after the backup the VM eats almost 50 GB of RAM (and I think it still keeps growing). And it doesn't return to normal usage after the backup has completed.

The backup location is a locally mounted Samba share. All VMs are backed up to that same share. Oh, and no ZFS in use.
I never had this problem before, but then again, I changed datacenters some time ago, upgraded the Proxmox version, and changed the server infrastructure somewhat. Earlier I used an NFS location for backups, but I had to switch to Samba because NFS backups jammed whole nodes after an interruption during a backup.

Long story short: the VM eats a huge amount of RAM after backup and doesn't release it, eventually causing the server to run out of memory.

Any ideas people? :oops:
 

Attachments

  • top.png (30.7 KB)
  • vm_usage.png (15.4 KB)
  • server_usage.png (17.1 KB)
Hi,

During the backup, the PMX host will cache all files from this VM until the RAM is full. If any other process needs more RAM, some cached files will be evicted from RAM... So I would not be very concerned about this high PMX RAM usage.
If you do not encounter any other problem, then do not treat this effect as a problem.

Good luck / Bafta !
 

Thanks guletz :)
Using RAM for caching is just fine. But this used RAM never returns to being usable. It's not used as disk cache; it's allocated directly to that VM's KVM process, and it just keeps growing every time a backup happens until it uses all the server's memory and the node crashes.
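As a host-side sanity check, one way to tell reclaimable page cache apart from memory resident in the KVM process is to compare /proc/meminfo with the process RSS (these are generic Linux commands, not something from this thread):

```shell
# Reclaimable page cache and buffers -- the kernel frees these
# automatically under memory pressure, so they are harmless
grep -E '^(MemFree|Buffers|Cached)' /proc/meminfo

# Resident set size (kB) of each running kvm process; if this far
# exceeds the VM's configured RAM, the growth is inside QEMU itself
# and will NOT be reclaimed by the kernel
ps -C kvm -o pid=,rss=,args=
```

If the RSS of the kvm process is several times the configured guest RAM, as described above, the kernel cannot get that memory back without the process releasing it.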

I've now done two things to see if they make a difference.
1) I disabled memory ballooning (I doubt it will affect anything, since it works inside the OS and most of that RAM is allocated to the KVM process, not to the guest OS itself; the VM OS has 8 GB while the process takes 30-60 GB).
2) I installed the QEMU guest agent in the VM, hoping it manages memory smarter while backing up.
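For reference, both of those changes can be made from the Proxmox CLI. A minimal sketch, using a hypothetical VM id of 100 (adjust to your own):

```shell
# Disable memory ballooning -- the VM gets a fixed memory allocation
qm set 100 --balloon 0

# Enable the guest agent option on the VM; the qemu-guest-agent
# package must also be installed inside the guest OS itself
qm set 100 --agent enabled=1
```

Both settings take effect after the VM is stopped and started again.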
 
It can, but it's also useless for me since I never overallocate RAM. In my case ballooning shouldn't really do anything anyway, because the nodes always have much more RAM than the VMs need.
 
No, not really. I have tried with and without memory ballooning, with and without the QEMU agent, and who knows what else. Now I have two differences and will soon see if they helped. Unfortunately, if they do, I won't know which one was the answer.

1) I upgraded to VE 6.2-11
2) I changed the disk from VirtIO to VirtIO SCSI
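The second change can also be done from the CLI. A sketch with a hypothetical VM id of 100 (the storage name and volume below are examples, not from the thread):

```shell
# Switch the VM's SCSI controller type to VirtIO SCSI
qm set 100 --scsihw virtio-scsi-pci

# The disk itself must then be attached as scsi0 instead of virtio0,
# e.g. (example volume name -- use your own from "qm config 100"):
# qm set 100 --scsi0 local-lvm:vm-100-disk-0
```

Inside the guest, the disk then shows up as /dev/sda instead of /dev/vda, so the guest's fstab or boot configuration may need adjusting.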

Oh, and I also moved from a 64 GB RAM host to a 128 GB RAM host, but that cannot be the cause, since this appeared even with only this one 8 GB VM running.
 
Thanks - I'm doing some testing with the settings myself. I'll make sure to update if/when I find something. Interestingly, this only started on a new node with ZFS, whereas the drives in the other node were formatted with LVM.

Thank you for the response.

Update: Turning off ballooning did not help... after a backup, memory on the node/host is consumed but doesn't seem to be allocated to the VMs. Linux tools don't show which process the memory is allocated to.
 
After some Googling, I think I found the issue. It has to do with the ZFS file system cache (the ARC). This page shows how to change it: https://www.solaris-cookbook.eu/lin...untu-centos-zfs-on-linux-zfs-limit-arc-cache/

Notice the comments. Since I'm booting from the ZFS-formatted drive, I had to run update-initramfs -v -u.
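The ARC cap described on that page boils down to setting the zfs_arc_max module parameter. A minimal sketch (the 4 GiB value is an example, not from the thread; pick a limit that fits your host):

```shell
# /etc/modprobe.d/zfs.conf
# Cap the ZFS ARC at 4 GiB; zfs_arc_max is in bytes (4 * 1024^3)
options zfs zfs_arc_max=4294967296

# When booting from ZFS, rebuild the initramfs afterwards so the
# option is applied at boot, then reboot:
#   update-initramfs -v -u
```

The current ARC size and limit can be checked on a ZFS host via /proc/spl/kstat/zfs/arcstats (the "size" and "c_max" rows).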

After some initial testing, this seems to be working as expected. From what I've read, ZFS is supposed to return memory when needed, so this may not strictly be necessary.

Thanks,

Scott


Update: After the nightly backups (where I typically saw the memory usage increase), memory is steady, so that was indeed the issue.
 
Yes, ZFS can be the cause. But unfortunately not in my case, since I don't use ZFS. I use(d) DRBD with EXT4 and have now started using Ceph, but ZFS nowhere.

Though if I have understood correctly, ZFS should eventually release all that used memory; it's only used for processing the data.