this is my test cluster:
node A: 3 × 1 TB filestore OSDs
node B: 2 × 1 TB filestore OSDs, 1 × 1 TB bluestore OSD
node C: 6 × 300 GB bluestore OSDs
I noticed that the bluestore OSDs take about 3.5 GB of RAM each, while the filestore ones take about 0.7 GB each.
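One way to check the per-daemon resident memory on each node (a minimal sketch, assuming the OSDs run as ceph-osd processes under systemd):

Code:
# resident memory (RSS, in KB) of every ceph-osd process on this node
ps -C ceph-osd -o pid,rss,args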
Following this thread, I added this to ceph.conf:
Code:
[osd]
bluestore cache size = 1G
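After restarting the OSDs so the new cache size is picked up, the value the daemon is actually using can be confirmed through its admin socket on the OSD node (a sketch; osd.0 is just an example id, adjust per OSD):

Code:
# restart an OSD so it rereads ceph.conf (id 0 as an example)
systemctl restart ceph-osd@0

# confirm the running value via the OSD's admin socket
ceph daemon osd.0 config get bluestore_cache_size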
Now everything is OK, but is this the right thing to do? Or do bluestore OSDs take as much RAM as is available, but release it if needed?