Search results

  1. various ceph-octopus issues

    Hello Alvin, following your recommendations I initiated a successful ceph self-healing, which led to a healthy ceph cluster again - great! # ceph -s cluster: id: ae713943-83f3-48b4-a0c2-124c092c250b health: HEALTH_WARN 2 pools have too many placement groups services...
  2. various ceph-octopus issues

    you wrote: Well, because Ceph is not responding in time. Try to set norebalance and norecovery temporarily. Then restart the OSDs one at a time. I've done that; 44/45 OSDs came back - one (osd.7) was kicked out after several attempts to start it - will replace this one tomorrow. ceph is now heavily... (the flag and restart commands are sketched after this results list)
  3. various ceph-octopus issues

    you wrote: It's on for new pools, not existing ones. For the others it's set to warn. Nope - vm_store was my one and only pool before updating to octopus, and it was set to 1024 PGs originally, which was changed to autoscale. Somebody misbehaved?!
  4. various ceph-octopus issues

    Changed the pool values: POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE vm_store 335.4G 3.0 11719G 0.0859 1.0 1024 128... (the autoscaler commands are sketched after this results list)
  5. various ceph-octopus issues

    Hi Alvin - thanks for your quick answers! Regarding your answer to 5. - That's exactly what I tried - but it failed - so the question is how to clean up the residues (rbd error: file not found, etc.) in order to be able to repeat a successful restore? All three VMs are zombies - I'm not able to...
  6. various ceph-octopus issues

    Hello Forum! I run a 3-node hyper-converged meshed 10GbE ceph cluster, currently updated to the latest version, on 3 identical HP servers as a test environment (pve 6.3.3 and ceph octopus 15.2.8, no HA), with 3 x 16 SAS HDDs connected via HBA, 3 x pve-os + 45 OSDs, rbd only, activated...
  7. Upgraded to VE 6.3 ceph manager not starting

    Thanks for sharing your experiences! We run a 3-node full-mesh hyper-converged ceph cluster - and faced the same issue after upgrading to 6.3.2. Disabling the dashboard, as suggested, led to a HEALTH_OK status of ceph again. We will hold off on the ceph update to octopus until the situation is... (the dashboard commands are sketched after this results list)
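
The flag-and-restart sequence quoted in result 2 corresponds to the following Ceph CLI calls. This is a minimal sketch, not the exact commands from the thread: the CLI flag is spelled norecover (the post says "norecovery"), and osd.7 appears here only because it is the OSD mentioned in result 2.

    # temporarily stop rebalancing and recovery while OSDs are restarted
    ceph osd set norebalance
    ceph osd set norecover

    # restart one OSD at a time on its node, e.g. the osd.7 from result 2
    systemctl restart ceph-osd@7

    # once all OSDs are back up and in, clear the flags again
    ceph osd unset norebalance
    ceph osd unset norecover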
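Results 3 and 4 revolve around the PG autoscaler that Octopus enables; below is a sketch of the related commands, assuming the vm_store pool from the thread. Putting the pool into "warn" mode is an assumption about the intended fix, not something stated in the snippets.

    # show the autoscaler's view of all pools (the table quoted in result 4)
    ceph osd pool autoscale-status

    # put the existing pool into warn-only mode so the autoscaler reports instead of resizing
    ceph osd pool set vm_store pg_autoscale_mode warn

    # optionally make warn-only the default for newly created pools as well
    ceph config set global osd_pool_default_pg_autoscale_mode warn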
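Result 7 mentions disabling the dashboard to bring the manager back to HEALTH_OK; a sketch of the standard mgr module commands for that follows (re-enabling it later is an assumption, the snippet only describes disabling it).

    # turn the dashboard module off until the mgr issue is sorted out
    ceph mgr module disable dashboard

    # re-enable it later if desired
    ceph mgr module enable dashboard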