erasure coded

  1. CEPH Erasure Coded Configuration: Review/Confirmation

    First, let me contextualize our set-up: we have a 3-node cluster where we will be using CEPH for storage hyperconvergence. We are familiarizing ourselves with CEPH and would love to have someone more experienced chime in. All of our storage hardware is SSDs (24x 2TB NVMe, 8 per server)...
  2. Issues creating CEPH EC pools using pveceph command

    I wanted to start a short thread here because I believe I may have found either a bug or a mistake in the Proxmox documentation for the pveceph command, or maybe I'm misunderstanding something, and wanted to put it out there. Either way, I think it may help others. I was going through the CEPH setup for...
  3. Erasure Code and failure-domain=datacenter

    Please help me with some advice. In my test scheme with three data centers, I need to create an Erasure Code pool for cold data. I used the documentation at https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_ec_pools and chose k=6, m=3 (I also tried k=4, m=2 from the documentation) in...
  4. CephFS EC Pool recommendations

    Hi, I have 2 SSDs per node and 6 nodes, which makes 12 SSDs. Now, what will give me good capacity and resilience against failures? I am confused between choosing EC21 (i.e. K=2, M=1, 66% capacity), EC42 (i.e. K=4, M=2, 66% capacity), and EC32 (i.e. K=3, M=2, 60%...
  5. Ceph - Bluestore - Crash - Compressed Erasure Coded Pool

    We initially tried this with Ceph 12.2.4 and subsequently re-created the problem with 12.2.5. Using 'lz4' compression on a Ceph Luminous erasure coded pool causes OSD processes to crash. Changing the compressor to snappy results in the OSDs being stable; when the crashed OSD starts thereafter...
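
A quick, hedged sketch of the constraint that runs through threads 1 and 3 above: Ceph's default erasure-coded CRUSH rule places each of the k + m chunks in a distinct bucket of the chosen failure domain, so a profile only fits when k + m does not exceed the number of such buckets. The helper function and the host/datacenter counts below are illustrative assumptions taken from the snippets, not calls into any Ceph API.

```python
# Minimal sketch, assuming Ceph's default EC placement of one chunk per
# failure-domain bucket (host, datacenter, ...).

def ec_profile_fits(k: int, m: int, failure_domain_buckets: int) -> bool:
    """Can k data + m coding chunks each land in a distinct bucket?"""
    return k + m <= failure_domain_buckets

# Thread 3: k=6, m=3 with failure-domain=datacenter and only 3 datacenters;
# 9 chunks cannot map to 3 buckets, so PGs will not become active+clean
# unless a custom CRUSH rule allows several chunks per datacenter.
print(ec_profile_fits(6, 3, 3))   # False

# Thread 1: 3 hosts with failure-domain=host; only profiles with k + m <= 3
# (e.g. k=2, m=1) fit without dropping to failure-domain=osd.
print(ec_profile_fits(2, 1, 3))   # True
```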
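
The comparison in thread 4 comes down to the same arithmetic: usable capacity is k / (k + m), and the pool tolerates the loss of up to m chunks (m hosts, with failure-domain=host). A minimal sketch, using only the profiles named in that snippet:

```python
# Minimal sketch of the capacity/resilience trade-off for the EC profiles
# compared in thread 4: usable fraction k / (k + m), up to m chunk losses,
# and at least k + m hosts needed when the failure domain is host.

profiles = {"EC21": (2, 1), "EC42": (4, 2), "EC32": (3, 2)}

for name, (k, m) in profiles.items():
    usable = k / (k + m)
    print(f"{name}: k={k}, m={m}, usable capacity {usable:.1%}, "
          f"tolerates {m} failure(s), needs >= {k + m} hosts")

# EC21: 66.7% usable, tolerates 1 failure,  needs 3 hosts
# EC42: 66.7% usable, tolerates 2 failures, needs 6 hosts
# EC32: 60.0% usable, tolerates 2 failures, needs 5 hosts
```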