[SOLVED] Proxmox is using swap with lot of RAM available

Discussion in 'Proxmox VE: Installation and configuration' started by batijuank, Mar 7, 2018.

  1. batijuank

    batijuank New Member

    Joined:
    Nov 16, 2017
    Messages:
    27
    Likes Received:
    0
    While watching the backup (I ran vzdump 111 --node asgard --mode stop --compress gzip --remove 0 --storage local and watched with watch -n 1 free -h), I discovered that during the backup used RAM never passes 4GB, while free RAM decreases and buff/cache grows to 20GB, and then the system starts swapping. I don't know why buff/cache doesn't count as used RAM or what's causing this behavior, but at least now I have an idea of what's causing the swapping. Does anyone know the cause of this?
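
    For reference, the kernel's own accounting answers part of this: the buff/cache column in free is mostly clean page cache, which the kernel can reclaim on demand, so it is reported separately from "used" and counted into MemAvailable. A small monitoring sketch (the vzdump command is from the post above; the grep is mine):

    ```shell
    # Run the backup in one terminal, e.g.:
    #   vzdump 111 --node asgard --mode stop --compress gzip --remove 0 --storage local
    # and watch the raw counters in another (wrap the grep in `watch -n 1 '...'`).
    # MemAvailable stays high even while Cached grows, because clean page cache
    # is reclaimable on demand -- which is why buff/cache is not counted as "used".
    grep -E '^(MemFree|MemAvailable|Cached|SwapFree):' /proc/meminfo
    ```
    
    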

    @dietmar @Klaus Steinberger @denos @mbaldini
     
  2. batijuank

    batijuank New Member

    Joined:
    Nov 16, 2017
    Messages:
    27
    Likes Received:
    0
    So, after a lot of searching on the web, I "hacked a patch" to solve my problem. From what I learned, every system has a page cache feature which it uses to cache read files into unused memory areas. If these files are modified, the system marks those areas as dirty and later writes the modifications back to disk. Because of how this feature works, the system does not count these memory areas as used, but as cache/buffer and also as available memory. When applications start to demand memory, the system is supposed to evict pages from the page cache; however, for some unknown reason that does not seem to happen here, so I created a script that forces my system to do so. If someone has a better solution, finds an error, or knows a reason not to run this script, please do tell.
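
    For what it's worth, the manual eviction a script like this relies on is a documented kernel knob (vm.drop_caches); these are the standard invocations, all root-only, with a sync first so dirty pages get written out:

    ```shell
    sync                               # write dirty pages to disk first
    echo 1 > /proc/sys/vm/drop_caches  # free clean page cache
    echo 2 > /proc/sys/vm/drop_caches  # free reclaimable slab (dentries, inodes)
    echo 3 > /proc/sys/vm/drop_caches  # both of the above
    ```

    Note this only drops clean, already-reclaimable memory; it is a diagnostic knob rather than something meant for routine use.
    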
     
  3. batijuank

    batijuank New Member

    Joined:
    Nov 16, 2017
    Messages:
    27
    Likes Received:
    0
    The script sums used memory and cache/buffer memory; if this total goes beyond a threshold, it asks the system to drop cached pages. Furthermore, if the system started to swap while there is still available memory, it flushes the swap back to memory. I know this is an ugly solution, but the lag is shorter than when I let the system swap on its own.
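
    The logic described above can be sketched like this (a minimal sketch, not the attached script; the 20 GB threshold and the messages are my own, and the drop/flush actions need root):

    ```shell
    #!/bin/sh
    # Sketch: drop clean caches when used+cache passes a threshold, and pull
    # swap back into RAM when swapping happens despite available memory.

    THRESHOLD_KB=$((20 * 1024 * 1024))   # 20 GB; tune for your node

    mem_total=$(awk '/^MemTotal:/     {print $2}' /proc/meminfo)
    mem_free=$(awk '/^MemFree:/       {print $2}' /proc/meminfo)
    mem_avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    swap_total=$(awk '/^SwapTotal:/   {print $2}' /proc/meminfo)
    swap_free=$(awk '/^SwapFree:/     {print $2}' /proc/meminfo)

    used_plus_cache=$((mem_total - mem_free))   # "used" + buff/cache, in kB
    swap_used=$((swap_total - swap_free))

    if [ "$used_plus_cache" -gt "$THRESHOLD_KB" ]; then
        echo "used+cache ${used_plus_cache} kB over threshold, dropping clean caches"
        sync
        [ "$(id -u)" -eq 0 ] && echo 3 > /proc/sys/vm/drop_caches
    fi

    # If we are swapping even though RAM could absorb it all, pull swap back in:
    if [ "$swap_used" -gt 0 ] && [ "$mem_avail" -gt "$swap_used" ]; then
        echo "flushing ${swap_used} kB of swap back to RAM"
        [ "$(id -u)" -eq 0 ] && swapoff -a && swapon -a
    fi
    ```

    Note that swapoff -a blocks until everything is read back in, so the flush itself causes a burst of I/O; run it from cron at a generous interval rather than in a tight loop.
    
    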
     
  4. batijuank

    batijuank New Member

    Joined:
    Nov 16, 2017
    Messages:
    27
    Likes Received:
    0
    Another one, I think this one's better. I also added some comments. Any feedback is welcome.
     

    Attached Files:

  5. batijuank

    batijuank New Member

    Joined:
    Nov 16, 2017
    Messages:
    27
    Likes Received:
    0
    Another update
     

    Attached Files:

  6. batijuank

    batijuank New Member

    Joined:
    Nov 16, 2017
    Messages:
    27
    Likes Received:
    0
    I finally solved the problem: I was using writeback as the KVM HDD cache mode. According to this guide, when you use this type of cache:

    This mode causes qemu-kvm to interact with the disk image file or block device with neither O_DSYNC nor O_DIRECT semantics, so the host page cache is used and writes are reported to the guest as completed when placed in the host page cache, and the normal page cache management will handle commitment to the storage device. Additionally, the guest’s virtual storage adapter is informed of the writeback cache, so the guest would be expected to send down flush commands as needed to manage data integrity.
    Analogous to a raid controller with RAM cache.

    I highlighted the important part in bold. Due to this configuration, I was spending RAM both on the ZFS ARC and on buffer/cache in my node; every time I started a backup or any I/O-demanding operation I ended up consuming up to 36 GB of memory (20 on buffer/cache plus 16 on ARC). Even if I set the ARC to its minimum (64MB), I wouldn't be able to serve that demand, since Proxmox plus KVM RAM demand is 8GB. Anyway, I have now set all my KVM HDD caches to No cache. Three days have passed without a RAM issue. I want to thank you both for your help. Now I can get back to setting up my Nethserver KVMs.
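
    For anyone landing here, the cache mode can also be changed from the CLI. A hedged example, assuming VMID 111 and a virtio disk on local storage (the volume string is illustrative; check qm config for your actual disk line, since qm set replaces the whole option string):

    ```shell
    # Inspect the current disk line first:
    qm config 111 | grep virtio0
    # Re-set it with cache=none, keeping the rest of the line as printed above:
    qm set 111 --virtio0 local:111/vm-111-disk-1.qcow2,cache=none

    # Optionally also cap the ZFS ARC (value in bytes; 4 GiB here) so ARC and
    # host page cache cannot jointly starve the guests:
    echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
    update-initramfs -u          # then reboot for the module option to apply
    ```
    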

    I'm also experiencing similar issues when copying/creating files over NFS/SMB.
     
  7. tiberius

    tiberius New Member

    Joined:
    Nov 5, 2016
    Messages:
    9
    Likes Received:
    0
    Hello folks,

    I am currently running two Proxmox nodes in a non-cluster environment (default ext4).
    Each node hosts just two or three KVM guests with ballooning and qemu enabled, and I have observed massive Proxmox swapping since 4.4.

    I still wonder why swapping occurs, because my system has 64GB of RAM, and node nr01, with 2 vhosts, only has 32GB and 4GB assigned.
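
    Not a full answer, but a common first knob to check on hosts that swap despite plenty of free RAM is vm.swappiness (not mentioned earlier in this thread; lowering it makes the kernel prefer reclaiming page cache over swapping out guest memory):

    ```shell
    cat /proc/sys/vm/swappiness   # Debian/Proxmox default is 60
    # To lower it (as root):
    #   sysctl -w vm.swappiness=10
    #   echo 'vm.swappiness = 10' >> /etc/sysctl.conf   # persist across reboots
    ```
    
    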

    Cheers
     