Limit Ceph Luminous RAM usage

May 9, 2017
I am trying to limit my OSD RAM usage.
Currently my OSDs (3 of them) are using ~70% of my RAM (the RAM is now completely full and the host is lagging).

Is there a way to limit the RAM usage of each OSD?
 
We also see this - RAM use of up to 20GB on a host with 4 x 300GB HDDs.

Looking at the documentation above, it seems an adjustment is required for us. In Proxmox's deployment of Ceph, are we to edit /etc/pve/ceph.conf to include the currently absent cache limit, and then restart the OSDs / Ceph service on each server? (Daft question, but just want to be 100% sure!)

Jon
 
If you are on BlueStore, yes ;) Note that the cache limit is neither a hard limit nor does it represent all the memory the OSD process might use, so expect to see more usage than the configured amount, especially under load.
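
For reference, a minimal sketch of what such a cache limit might look like in /etc/pve/ceph.conf. The 512 MiB value and the bare `[osd]` section are illustrative assumptions, not a recommendation for any particular workload:

```ini
# /etc/pve/ceph.conf -- hypothetical addition, example value only
[osd]
    # BlueStore cache size in bytes; 536870912 = 512 MiB (illustrative)
    bluestore_cache_size = 536870912
```

The setting is read at OSD start, so each OSD has to be restarted for it to take effect (e.g. `systemctl restart ceph-osd@0.service` per OSD id).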
 
I'm running Ceph on the latest release from Proxmox and I'm still getting major memory leak issues. The 12.2.2 release does not fix all the leaks; the OOM killer is killing an OSD roughly every 5 to 10 hours. I've not been able to find an example of how to reduce the cache to see if this is part of the issue.
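
In case it helps, a hedged sketch of how one might check and lower the cache on a running cluster (assuming BlueStore OSDs; `osd.0` and the 512 MiB value are placeholder examples, and the `ceph daemon` command must be run on the host that carries that OSD):

```shell
# Show the cache settings currently active on a running OSD
# (run on the host that carries osd.0)
ceph daemon osd.0 config show | grep bluestore_cache

# After adding a lower bluestore_cache_size to the [osd] section of
# /etc/pve/ceph.conf, restart each OSD so the new limit takes effect
systemctl restart ceph-osd@0.service
```

Whether this tames the OOM kills depends on whether the growth is really cache or a genuine leak; watching the OSD's resident memory after the restart should tell you which.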