Currently all nodes are under load and memory consumption is around 90-95% on each of them.
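For what it's worth, this is how I check how much of that 90-95% is actual process memory versus reclaimable cache (plain Linux tools, nothing Ceph-specific; on Linux the "available" column matters more than "used"):

# Memory overview in gigabytes; "available" includes reclaimable buff/cache:
free -g

# Largest consumers by resident set size (should be the ceph-osd daemons):
ps -eo rss,comm --sort=-rss | head -n 15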
CEPH cluster details:
* 5 nodes in total, all 5 used for OSDs, 3 of them also used as monitors
* All 5 nodes currently have 64 GB RAM
* 12 OSD disks in total per node: 6x 6 TB HDD and 6x 500 GB SSD
* Nodes run Ceph exclusively, so no VMs or any other memory consumers
* PVE v5.3-11
I know that the current Ceph version in PVE already supports the new RAM caching for OSDs, up to 4 GB per OSD where possible.
This should mean that 48 GB (12 OSDs x 4 GB) is pretty much used up by that OSD caching, leaving around 12-14 GB for other processes.
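In case it helps anyone checking the same thing: the option behind that caching is osd_memory_target, and (assuming BlueStore OSDs; osd.0 is just an example ID) the effective value can be queried on a running OSD, or lowered in ceph.conf if RAM gets too tight:

# Query the configured memory target of a running OSD (value in bytes):
ceph daemon osd.0 config get osd_memory_target

# Example: lower it to 3 GB per OSD in ceph.conf (restart the OSDs to apply):
[osd]
osd_memory_target = 3221225472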
I'm not sure yet how RAM usage will be affected in this new version once a rebalance starts after some OSD or a whole OSD node fails.
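I plan to keep an eye on the per-OSD memory breakdown while testing that; the admin socket exposes it (again assuming BlueStore, with osd.0 just an example):

# Dump the OSD's internal memory pools (cache, pglog, etc.) to see
# which part grows while recovery/backfill is running:
ceph daemon osd.0 dump_mempools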
PS: I just want to know whether I have added enough RAM and am merely being paranoid about that 90-95% usage per node, or whether I should add some more memory.