Ceph 12.1.0-pve2 high RAM usage

Hi,

we have a two-node cluster test environment and we have concerns about RAM usage, which is at about 90% on each node. Each node has 64 GB of RAM and five VMs consuming about 24 GB of it. What consumes the rest?
 
Try 'top' and sort by %MEM.
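For example, with procps top in batch mode (Shift+M toggles the memory sort interactively; ps shown as an alternative):

Code:
# one snapshot, sorted by resident memory
top -b -o %MEM -n 1 | head -n 20

# alternative: ps sorted by memory
ps aux --sort=-%mem | head -n 15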
 
Also upgrade to the newest Ceph version on PVE 5.x, v12.2.5, as there was a bug in Ceph where the RAM accounting of an OSD was not right. Besides that, upgrade the cluster to PVE 5.2. ;)
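A rough sketch of that upgrade on a PVE 5.x / Debian Stretch node; the repository line below is the standard Proxmox Ceph luminous repo (normally set up by 'pveceph install'), adjust it to your setup:

Code:
# make sure the Ceph luminous repository is configured
echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" > /etc/apt/sources.list.d/ceph.list

apt update && apt dist-upgrade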
 
Most of the RAM is used by ceph-osd processes, so I will update Ceph and the cluster and check the results. Thank you for the help.
 
I have upgraded Ceph, but in ceph-dash I get the following message: "'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'". In addition, ceph health reports HEALTH_WARN no active mgr, and when I try to check the OSDs I get the error mon_command failed - this command is obsolete. Where is the problem and how can I fix it?
 
As the message says, you're parsing the wrong JSON fields (ceph report).

There is nothing to fix; wherever you see this message, you are scraping the wrong fields. See the output of ceph report for the naming.
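To see the luminous field names, you can dump the full report and pull out the health section; a minimal sketch, assuming jq is installed:

Code:
# the report id goes to stderr, the JSON to stdout
ceph report 2>/dev/null | jq '.health'

As for the HEALTH_WARN no active mgr part: since luminous a running ceph-mgr daemon is required. On PVE 5.x one can be created with the bundled tooling; a sketch, assuming no manager exists yet:

Code:
pveceph createmgr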
 
Hi!

Perhaps you can specify a value for bluestore_cache_size.

/etc/pve/ceph.conf:
Code:
[global]
...
bluestore_cache_size = 536870912

... and restart every OSD. The value may have to be tuned for your workload. The default value is 0, which allows all available memory.
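For reference, a minimal sketch of restarting the OSDs after the config change, assuming systemd-managed OSDs as on PVE 5.x:

Code:
# restart all OSDs on this node
systemctl restart ceph-osd.target

# or restart a single OSD by id, e.g. osd.0
systemctl restart ceph-osd@0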
 
"The default value is 0, which allows all available memory."

That's not true:
bluestore_cache_size

Description: The amount of memory BlueStore will use for its cache. If zero, bluestore_cache_size_hdd or bluestore_cache_size_ssd will be used instead.
http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/
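So if you want an explicit cap regardless of device class, set the per-class values too; a minimal sketch for /etc/pve/ceph.conf (1 GiB and 3 GiB are the luminous defaults for HDD- and SSD-backed OSDs):

Code:
[global]
# per-device-class cache caps, used when bluestore_cache_size = 0
bluestore_cache_size_hdd = 1073741824   # 1 GiB, luminous default for HDD OSDs
bluestore_cache_size_ssd = 3221225472   # 3 GiB, luminous default for SSD OSDs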
 
