PVE 5.4-3 CEPH High RAM Usage?

HE_Cole

Hi Everyone!

I have a small 3-node PVE cluster, all running 5.4-3. RAM usage has been growing on my hosts, but I have not added any new data or VMs to the cluster.

Each node has 8x 2TB drives,
2x 120GB SSDs for WAL/DB and the OS,
and 64GB RAM.

Most of the usage is ceph-osd, at about 6% per OSD.

1. Is this expected?

2. Can I limit this?

3. Is it safe to limit RAM usage?

Here is the top output:

Code:
 2344 root      rt   0  196916  71956  51484 S   1.7  0.1   1500:16 corosync                                                                           
   8191 ceph      20   0 4948724 4.034g  29472 S   1.3  6.4 653:05.81 ceph-osd                                                                           
2142706 root      20   0 9939.2m 7.941g  27216 S   1.0 12.6   1458:08 kvm                                                                                 
   9636 ceph      20   0 5025480 4.107g  29712 S   0.7  6.5 759:34.93 ceph-osd                                                                           
  12900 ceph      20   0 4869640 3.959g  29728 S   0.7  6.3 610:09.46 ceph-osd                                                                           
 121234 root      20   0 3661284 1.972g  26828 S   0.7  3.1   1073:59 kvm                                                                                 
   2358 ceph      20   0  711488 305584  23100 S   0.3  0.5 633:49.86 ceph-mon                                                                           
  11314 ceph      20   0 1283448 460700  29564 S   0.3  0.7 260:27.39 ceph-osd                                                                           
  14594 ceph      20   0 4953164 4.039g  29820 S   0.3  6.4 617:15.15 ceph-osd                                                                           
  15975 ceph      20   0 4800588 3.893g  29832 S   0.3  6.2 516:20.06 ceph-osd                                                                           
2452372 root      20   0   45208   4108   3128 R   0.3  0.0   0:00.56 top                                                                                 
3738213 root      20   0  780660  63176  45640 S   0.3  0.1  72:32.12 pmxcfs

Let me know. Thanks!
 
This is the relevant part of the v12.2.10 release notes (https://ceph.com/releases/v12-2-10-luminous-released/):

The bluestore_cache_* options are no longer needed. They are replaced
by osd_memory_target, defaulting to 4GB. BlueStore will expand
and contract its cache to attempt to stay within this
limit. Users upgrading should note this is a higher default
than the previous bluestore_cache_size of 1GB, so OSDs using
BlueStore will use more memory by default.
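That 4 GB default largely explains what you are seeing: ~6.4% of 64 GB is about 4.1 GB per ceph-osd, and with 8 BlueStore OSDs per node that is roughly 32 GB of cache target per host. If you want to confirm the value an OSD is actually running with, something like this should work (osd.0 is just an example id; run it on the node hosting that OSD so its admin socket is reachable):

Code:
ceph daemon osd.0 config get osd_memory_target   # osd.0 is an example; use one of your OSD ids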
 
It's:

ceph version 12.2.11 (c96e82ac735a75ae99d4847983711e1f2dbf12e5) luminous (stable)

If that's the case, can I lower it?

I use Ceph BlueStore OSDs... if you do as well, review this reference doc: http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/


TL;DR: set the maximum memory usage per OSD in ceph.conf. I have done this without any issues.
Code:
[global]
# ... other settings ...
osd_memory_target = 2147483648

The above example sets the memory target per OSD to ~2 GB instead of the default 4 GB, thereby saving memory.
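The value is in bytes (2147483648 = 2 GiB). A rough sketch of rolling this out without a full reboot, assuming systemd-managed OSDs as on PVE 5.x (restart one OSD at a time and let the cluster settle in between):

Code:
ceph osd set noout                # avoid rebalancing while OSDs restart
systemctl restart ceph-osd@0      # repeat for each OSD id, checking 'ceph -s' in between
ceph osd unset noout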
 
hi,

I have the very same problem, but it develops over time. I had set osd_memory_target, but over a month the OSDs take more and more memory:

From my post to the ceph-users mailing list:

What we have:

* Ceph version 12.2.11
* 5 x 512GB Samsung 850 Evo
* 5 x 1TB WD Red (5.4k)
* OS: Debian Stretch (Proxmox VE 5.x)
* 2 x Intel Xeon E5-2620 v4
* Memory: 64GB DDR4

I've added this to ceph.conf:

...

[osd]
osd memory target = 3221225472
...

Which is active:


===================
# ceph daemon osd.31 config show | grep memory_target
"osd_memory_target": "3221225472",
===================

The problem is that the OSD processes are eating my memory:

==============
# free -h
              total        used        free      shared  buff/cache   available
Mem:            62G         52G        7.8G        693M        2.2G         50G
Swap:          8.0G        5.8M        8.0G
==============

As an example, osd.31, which is an HDD (WD Red):


==============
# ceph daemon osd.31 dump_mempools

...

"bluestore_alloc": {
"items": 40379056,
"bytes": 40379056
},
"bluestore_cache_data": {
"items": 1613,
"bytes": 130048000
},
"bluestore_cache_onode": {
"items": 64888,
"bytes": 43604736
},
"bluestore_cache_other": {
"items": 7043426,
"bytes": 209450352
},
...
"total": {
"items": 48360478,
"bytes": 633918931
}
=============


=============
# ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -30
6.5 1.8 5040944 6594 /usr/bin/ceph-osd -f --cluster ceph --id 31 --setuser ceph --setgroup ceph
6.4 2.4 5053492 6819 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
6.4 2.3 5044144 5454 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
6.2 1.9 4927248 6082 /usr/bin/ceph-osd -f --cluster ceph --id 5 --setuser ceph --setgroup ceph
6.1 2.2 4839988 7684 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
6.1 2.1 4876572 8155 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
5.9 1.3 4652608 5760 /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph
5.8 1.9 4699092 8374 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
5.8 1.4 4562480 5623 /usr/bin/ceph-osd -f --cluster ceph --id 30 --setuser ceph --setgroup ceph
5.7 1.3 4491624 7268 /usr/bin/ceph-osd -f --cluster ceph --id 34 --setuser ceph --setgroup ceph
5.5 1.2 4430164 6201 /usr/bin/ceph-osd -f --cluster ceph --id 33 --setuser ceph --setgroup ceph
5.4 1.4 4319480 6405 /usr/bin/ceph-osd -f --cluster ceph --id 29 --setuser ceph --setgroup ceph
1.0 0.8 1094500 4749 /usr/bin/ceph-mon -f --cluster ceph --id fc-r02-ceph-osd-01 --setuser ceph --setgroup ceph
0.2 4.8 948764 4803 /usr/bin/ceph-mgr -f --cluster ceph --id fc-r02-ceph-osd-01 --setuser ceph --setgroup ceph
=================
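For comparison, a rough way to total up what the ceph-osd processes actually hold in resident memory, as a sanity check against the sum of the configured targets:

Code:
# ps reports RSS in KiB; sum it over all ceph-osd processes and print GiB
ps -C ceph-osd -o rss= | awk '{sum+=$1} END {printf "%.1f GiB\n", sum/1024/1024}'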

After a reboot, the node uses roughly 30GB, but over a month it is again over 50GB and growing.

The thread starts here: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/034530.html

It's getting very annoying. :)

I'm pretty sure that the problem started with 12.2.11-pve or 12.2.10-pve.
 
I rebooted the nodes after setting this, so the OSDs pick up the new values. Did you reboot the system too? Otherwise you could try to restart the OSDs and check if that helps.

It seems the daemon does not release the already allocated RAM.
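For example, restart a single OSD and then watch whether its resident memory creeps back up over time (osd.31 is just the example id from your output above):

Code:
systemctl restart ceph-osd@31
ps -C ceph-osd -o pid,rss,cmd | grep -- '--id 31'   # RSS in KiB for that OSD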
 
12.2.10-pve.
Yes, it came with 12.2.10 (https://ceph.com/releases/v12-2-10-luminous-released/):

The bluestore_cache_* options are no longer needed. They are replaced
by osd_memory_target, defaulting to 4GB. BlueStore will expand
and contract its cache to attempt to stay within this
limit. Users upgrading should note this is a higher default
than the previous bluestore_cache_size of 1GB, so OSDs using
BlueStore will use more memory by default.


I will give a quick overview of my values and hardware:
* Ceph version 12.2.12
* 4 x 1TB Western Digital Blue
* OS: Debian Stretch (Proxmox VE 5.4-5)
* 2 x Intel Xeon X5650
* Memory: 144GB DDR3

ceph.conf:
...
[osd]
osd memory target = 1932735283
...

OSD Config:
===================
# ceph daemon osd.15 config show | grep memory_target
"osd_memory_target": "1932735283",
===================

OSD Memory Usage:
==============
# ceph daemon osd.15 dump_mempools

...
"bluestore_alloc": {
"items": 18386672,
"bytes": 18386672
},
"bluestore_cache_data": {
"items": 1223,
"bytes": 26435584
},
"bluestore_cache_onode": {
"items": 6888,
"bytes": 4628736
},
"bluestore_cache_other": {
"items": 1842765,
"bytes": 66594727
},
...
"total": {
"items": 20753339,
"bytes": 238866524
}
=============

OSD Processes:
=============
# ps -eo pmem,pcpu,vsize,pid,cmd | grep -E "ceph-osd|ceph-mgr|ceph-mon" | sort -k 1 -nr | head -30
1.4 13.6 2930812 4388 /usr/bin/ceph-osd -f --cluster ceph --id 15 --setuser ceph --setgroup ceph
1.3 9.8 2814848 3965 /usr/bin/ceph-osd -f --cluster ceph --id 7 --setuser ceph --setgroup ceph
1.3 8.8 2808152 4138 /usr/bin/ceph-osd -f --cluster ceph --id 11 --setuser ceph --setgroup ceph
1.3 11.0 2784524 4658 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
=================

It seems it is not working in my setup either. I am currently using the stupid allocator too.
# ceph daemon osd.15 config show | grep allocator
"bluefs_allocator": "stupid",
"bluestore_allocator": "stupid",
"bluestore_bitmapallocator_blocks_per_zone": "1024",
"bluestore_bitmapallocator_span_size": "1024",

Do you have any news on whether it works better now with the bitmap allocator instead of stupid?
 
hi,

Since my Icinga2 just sent a notification: this problem still exists with 5.4-13 and Ceph 12.2.12-pve1. I have to reboot (I dislike just restarting the OSDs) after a few weeks.

cu denny
With the bitmap allocator? I'm not sure it's enabled by default on Luminous now (like for Nautilus), but on 12.2.12 you can enable it without problems.
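If it's not enabled yet, the switch should just be the two allocator options in ceph.conf, something like:

Code:
[osd]
# restart each OSD afterwards so the allocator change takes effect
bluestore_allocator = bitmap
bluefs_allocator = bitmap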
 
hi,

I enabled it a long time ago (I think one version before it was officially supported), but it didn't change anything. I hope that removing all the WD Red disks will help.

As an example, for an SSD:

Code:
ceph daemon osd.5 config show | grep allocator
"bluefs_allocator": "bitmap",
"bluestore_allocator": "bitmap",
"bluestore_bitmapallocator_blocks_per_zone": "1024",
"bluestore_bitmapallocator_span_size": "1024",


cu denny
 