Search results

  1.

    [SOLVED] ceph dead > disaster recovery

    Hello everyone, maybe someone can help me; my ceph is dead & I can't revive it :-( The affected cluster has 7 nodes, 2 OSDs per node (1 x HDD, 1 x SSD), 4 pools: cephfs, hdd-only with erasure, hdd-only with 3/2, ssd-only with 3/2, and 3 ceph-mon on n01 n03 n05 (where n01...
  2.

    Replace dead node that had ceph services running

    Hello all I have a dead node (system HD went bust) that had ceph OSDs and a monitor running. The manual describes that the node can be removed and that a new node with the same IP and hostname can in fact be added, provided it is a fresh PVE install. However, with ceph, things might be more difficult...
  3.

    pveceph osd create /dev/sda fails

    Hi all Prologue: I had an unresponsive node (let's call it #6) which I could ping; the node's OSD was up and in; however, I could not ssh into it (err: "broken pipe" directly after entering the password). So I turned it off, then on. It booted; however, its OSD did not start. Next I updated all...
  4.

    [SOLVED] OSDs fail on one node / cannot re-create

    Hi all My cluster consists of 6 nodes with 3 OSDs each (18 OSDs total), PVE 6.2-6 and ceph 14.2.9. BTW, it had been up and running fine for 7 months and went through all updates flawlessly so far. However, after rebooting the nodes one after the other upon updating to 6.2-6, the 3 OSDs on...