Ceph osd_pool_default_size vs chooseleaf firstn 0

Stalfos

New Member
Sep 28, 2023
Hi,

I inherited a Proxmox cluster running Ceph and am trying to understand how it's configured.

When looking at the CRUSH map I noticed this rule:

Code:
id 1
type replicated
step take default class nvme
step chooseleaf firstn 0 type host
step emit

The manual has this to say about firstn: "If {num} == 0, choose pool-num-replicas buckets (as many buckets as are available)", which I take to mean that each object is written to every available host — is that right?

In the ceph.conf file I have this: osd_pool_default_size = 3, which, according to the manual, means each object should be stored as 3 replicas.
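Note that osd_pool_default_size is only the default applied when a pool is created; each existing pool carries its own size. You can check what your pools actually use — the pool name `mypool` below is a placeholder for one of your pool names:

```shell
# List pools, then show the replica count a specific pool actually uses
ceph osd lspools
ceph osd pool get mypool size

# Show the cluster-wide default applied to newly created pools
ceph config get osd osd_pool_default_size
```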

Both settings seem to control how many copies of each object get written. Am I misunderstanding something? If not, which one takes precedence?