According to the Ceph docs, if a pool has fewer than min_size OSDs available, IO is blocked. That includes writes, but also reads. This seems counter-intuitive to me; does anyone know why this is the case?
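For reference, this is roughly how I look up size and min_size per pool. The pool names below are just placeholders, and I'm assuming the plain "key: value" output that ceph osd pool get prints for me:

```python
# Sketch: read size/min_size per pool via the ceph CLI (pool names are placeholders).
import subprocess

def pool_setting(pool: str, key: str) -> int:
    """Return a numeric pool setting, e.g. 'size' or 'min_size'."""
    out = subprocess.run(
        ["ceph", "osd", "pool", "get", pool, key],
        check=True, capture_output=True, text=True,
    ).stdout  # expected form: "min_size: 2"
    return int(out.split(":", 1)[1])

for pool in ["cephfs_metadata", "cephfs_data", "cephfs_ec_data"]:  # placeholder names
    print(pool, pool_setting(pool, "size"), pool_setting(pool, "min_size"))
```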
Examples:
- Standard 3/2 pool, replicated with size 3, min_size 2: I can fully read and write with 2 OSDs, but with only 1 everything stops. Why should I not be able to read from the remaining OSD? (One might argue that there are reads that could cause metadata writes, so in case this is a problem, let's try to avoid that with the next example.)
- CephFS metadata and base data pools standard 3/2 replicated, but an additional 4+2 EC data pool on 6 separate OSDs. The EC pool is 6/5 (size 6, min_size 5), as suggested in the docs. This seems reasonable; however, when shutting down 2 OSDs of the EC pool (e.g. during maintenance of one host), it is completely blocked, even though all metadata changes (I hope) would go only to the replicated pools, which are still fully available.
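To make the arithmetic in both examples explicit, here is a tiny sketch of the rule as I understand it from the docs (a simplified per-PG view, ignoring CRUSH and recovery details):

```python
# Simplified per-PG view of the rule: IO (reads and writes) is blocked
# when fewer than min_size OSDs of the acting set are available.

def io_blocked(min_size: int, osds_up: int) -> bool:
    return osds_up < min_size

# Example 1: replicated 3/2 pool
print(io_blocked(min_size=2, osds_up=2))  # False -> reads and writes still work
print(io_blocked(min_size=2, osds_up=1))  # True  -> blocked, although one full copy is still there

# Example 2: 4+2 EC pool (k=4, m=2) with the suggested min_size = k + 1 = 5
k, m = 4, 2
print(io_blocked(min_size=k + 1, osds_up=6 - 2))  # True -> blocked with 2 OSDs down,
                                                  # although k=4 chunks would suffice to reconstruct the data
```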
One could lower min_size temporarily to gain access to the data, but the docs, I guess rightfully, warn about this. It seems it would be more practical if one could put a pool into read-only mode manually, but I've not found any reasonable way to do that (even though PGs apparently can have a read-only state). Any ideas? Thanks!
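For completeness, this is the kind of workaround I mean, just as a sketch: the pool name and values come from my EC example above, and the docs warn that running at min_size = k leaves no redundancy margin for writes:

```python
# Sketch: temporarily lower min_size on the EC pool for a maintenance window,
# then restore the recommended k+1. Pool name is a placeholder.
import subprocess

POOL = "cephfs_ec_data"  # placeholder name of the 4+2 EC pool

def set_min_size(pool: str, value: int) -> None:
    subprocess.run(
        ["ceph", "osd", "pool", "set", pool, "min_size", str(value)],
        check=True,
    )

set_min_size(POOL, 4)  # k = 4: keeps IO going while 2 of the 6 OSDs are down
try:
    input("Maintenance window open - press Enter once the OSDs are back up...")
finally:
    set_min_size(POOL, 5)  # back to the suggested k + 1
```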
- "Erasure code and min/max in Proxmox": https://forum.proxmox.com/threads/erasure-code-and-min-max-in-proxmox.144121/