Search results

  1. Make ceph resilient to multi node failure

    Thank you @fabian, that is a very interesting read. I hadn't heard of it before and will consider using stretch mode for my setup.
  2. Make ceph resilient to multi node failure

    Ah, I think I didn't understand the osd_pool_default_min_size parameter correctly. Alright, so if fewer than this number of copies are available, the pool goes read-only (see the size/min_size sketch after this list). 4/2 would fix that, but it would also make my usable space a little lower. 3/1 seems not to be recommended because it can cause data...
  3. Make ceph resilient to multi node failure

    Hello, I have a 4+1 node Proxmox setup: 4 full nodes plus one node just for quorum purposes (no OSDs on it and no VMs running on it). 2 nodes are located in one location, 2 in another, and the quorum node in a third (see the stretch mode sketch after this list). The reasoning is that I wanted this setup to withstand the failure of a...
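
The size/min_size behavior discussed in result 2 is set per pool from the Ceph CLI. A minimal sketch, assuming a replicated pool named vm-pool (the pool name is a placeholder):

```bash
# Keep 4 copies of every object, and keep serving I/O as long as at
# least 2 copies are reachable. If fewer than min_size copies are
# available, Ceph pauses I/O on the affected placement groups rather
# than risk inconsistent writes.
ceph osd pool set vm-pool size 4
ceph osd pool set vm-pool min_size 2

# Defaults for newly created pools can go in ceph.conf instead:
#   [global]
#   osd_pool_default_size = 4
#   osd_pool_default_min_size = 2
```

The reason 3/1 is discouraged: with min_size 1, a single surviving replica keeps accepting writes, and a later failure of that last copy loses the only up-to-date data.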
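
For the two-sites-plus-tiebreaker layout in result 3, the stretch mode mentioned in result 1 is typically enabled along these lines. A hedged sketch, assuming monitors a/b at site1, c/d at site2, the tiebreaker e at the third location, and an existing CRUSH rule named stretch_rule that places two replicas per datacenter (all monitor, site, and rule names are placeholders):

```bash
# Tag each monitor with its location so the two sites plus the
# tiebreaker form distinct failure domains.
ceph mon set_location a datacenter=site1
ceph mon set_location b datacenter=site1
ceph mon set_location c datacenter=site2
ceph mon set_location d datacenter=site2
ceph mon set_location e datacenter=site3

# Stretch mode requires the connectivity election strategy.
ceph mon set election_strategy connectivity

# Enable stretch mode with mon e as the tiebreaker. Replicated pools
# are raised to size 4 (two copies per site), so a whole site can
# fail and the cluster keeps serving I/O instead of going read-only.
ceph mon enable_stretch_mode e stretch_rule datacenter
```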