Recent content by Orionis

  1. Partition on ceph corrupt

    Sorry, since I posted the last message a fourth OSD has gone down.
  2. Partition on ceph corrupt

    Hi, it's worse now. After the rebalancing finished I started repairing the PGs as described in https://medium.com/opsops/recovering-ceph-from-reduced-data-availability-3-pgs-inactive-3-pgs-incomplete-b97cbcb4b5a1. After a while one of my OSDs went down and I can't restart it. I bought a new 4 TB HDD for...
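
    A minimal sketch of how one might check why the OSD refuses to start (assuming a systemd-managed OSD; the id 5 is only a placeholder):

      ceph osd tree                       # which OSDs are down / out
      systemctl status ceph-osd@5         # service state of the failed OSD
      journalctl -u ceph-osd@5 -n 100     # recent log lines, often showing the underlying I/O error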
  3. Partition on ceph corrupt

    If I try ceph pg 3.18 mark_unfound_lost revert, what happens to the data in it?
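
    For context, mark_unfound_lost has two modes: revert rolls each unfound object back to a previous version where one exists (brand-new objects are simply forgotten), while delete forgets them all. A sketch, reusing pg 3.18 from the post above:

      ceph pg 3.18 list_unfound                  # list the objects that would be affected
      ceph pg 3.18 mark_unfound_lost revert      # roll back to an older version where possible
      # ceph pg 3.18 mark_unfound_lost delete    # alternative: give the objects up entirely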
  4. Partition on ceph corrupt

    Yes, of course. In fact, in https://medium.com/opsops/recovering-ceph-from-reduced-data-availability-3-pgs-inactive-3-pgs-incomplete-b97cbcb4b5a1 he says a PG with no data can be recreated without losing data. See: PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS*...
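
    Those column headers look like the output of ceph pg ls; a quick way to reproduce that view for just the problem PGs (a sketch):

      ceph pg ls incomplete      # PG, OBJECTS, DEGRADED, MISPLACED, UNFOUND, BYTES, OMAP_BYTES*, ...
      ceph pg ls inactive        # the same columns for the inactive PGs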
  5. Partition on ceph corrupt

    I see this at the end of your post: "A situation with 2 replicas can be a bit different, Ceph might not be able to solve this conflict and the problem could persist. So a simple trick could be to choose the latest version of the object, set the noout flag on the cluster, stop the OSD that has a..."
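
    The quoted trick boils down to roughly the following (a sketch only; osd.5 is a placeholder, and the step of dealing with the stale object copy is deliberately left out, see the linked article):

      ceph osd set noout            # keep the cluster from rebalancing while an OSD is stopped
      systemctl stop ceph-osd@5     # stop the OSD holding the outdated copy
      # ... handle the stale object copy on that OSD as the article describes ...
      systemctl start ceph-osd@5
      ceph osd unset noout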
  6. Partition on ceph corrupt

    OK, but I have no such directory in /var/lib/ceph/osd/osd-5/: the osd-5 directory exists, but there is no current...
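
    For what it is worth, a current/ directory only exists on FileStore OSDs; a BlueStore OSD just has a block symlink, a keyring and a few small files there. One way to check which backend this OSD uses (a sketch; osd id 5 and the path are taken from the post above):

      ceph osd metadata 5 | grep osd_objectstore    # reports "bluestore" or "filestore"
      ls /var/lib/ceph/osd/osd-5/                   # BlueStore: block, keyring, ...; no current/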
  7. Partition on ceph corrupt

    Hi, good news: the nearfull OSD warning is gone.

      cluster:
        id:     2c042659-77b4-4303-8ecb-3f6a88cd7d54
        health: HEALTH_WARN
                Reduced data availability: 39 pgs inactive, 39 pgs incomplete
                41 pgs not deep-scrubbed in time
                41 pgs not scrubbed in time

      services...
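
    To see exactly which PGs are behind those warnings (a sketch):

      ceph health detail            # names the inactive / incomplete PGs under PG_AVAILABILITY
      ceph pg dump_stuck inactive   # the same PGs in tabular form, with their acting OSDs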
  8. Partition on ceph corrupt

    I'll try this first: ceph osd reweight-by-utilization
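
    reweight-by-utilization has a dry-run counterpart, so the change can be previewed before it is applied (a sketch; 120 is just the default threshold):

      ceph osd test-reweight-by-utilization 120   # dry run: shows which OSDs would be reweighted and how
      ceph osd reweight-by-utilization 120        # apply the reweight
      ceph osd df                                 # check per-OSD utilisation afterwards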
  9. Partition on ceph corrupt

    OK, now I have this from ceph health:

      HEALTH_WARN 2 nearfull osd(s); Reduced data availability: 39 pgs inactive, 39 pgs incomplete; 42 pgs not deep-scrubbed in time; 42 pgs not scrubbed in time; 2 pool(s) nearfull
      [WRN] OSD_NEARFULL: 2 nearfull osd(s)
          osd.0 is near full
          osd.8 is near full
      ...
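
    To see how close those two OSDs actually are to the limits (a sketch):

      ceph osd df                  # per-OSD capacity, %USE and PG count
      ceph osd dump | grep ratio   # the configured nearfull / backfillfull / full ratios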
  10. Partition on ceph corrupt

    OK, I see; I'll wait for the rebalancing to finish.
  11. Partition on ceph corrupt

    OK, I added a 2 TB HDD and the system is rebalancing. Do you think the incomplete PGs will be recovered afterwards?
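
    One way to follow the rebalance and see whether the incomplete PGs change state as it progresses (a sketch):

      watch -n 30 ceph -s       # recovery / backfill progress and remaining misplaced objects
      ceph pg ls incomplete     # does this list shrink as backfill proceeds?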
  12. Partition on ceph corrupt

    HEALTH_WARN noout flag(s) set; 1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set; 3 nearfull osd(s); Reduced data availability: 39 pgs inactive, 39 pgs incomplete; Low space hindering backfill (add storage if this doesn't resolve itself): 3 pgs backfill_toofull; 41...
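
    Once the stopped OSD is back and recovery has finished, the noout flag from the earlier step has to be cleared by hand, and backfill_toofull normally clears as space is freed (a sketch):

      ceph osd unset noout            # remove the flag set before stopping the OSD
      ceph pg ls backfill_toofull     # PGs currently waiting for free space
      ceph osd df                     # confirm the nearfull OSDs are draining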
  13. Partition on ceph corrupt

    Hello all. After one week the system has finished recovering, but I still have 40 incomplete PGs. Do you think this could help me? https://medium.com/opsops/recovering-ceph-from-reduced-data-availability-3-pgs-inactive-3-pgs-incomplete-b97cbcb4b5a1
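
    Before following the article it may be worth querying one of the incomplete PGs to see which OSDs its peering is blocked on (a sketch; 3.18 is a pg id reused from the earlier posts):

      ceph pg dump_stuck inactive                        # list the stuck PGs and their acting sets
      ceph pg 3.18 query | grep -A 20 recovery_state     # shows blocked_by / down_osds_we_would_probe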
  14. Partition on ceph corrupt

    Is it possible that this HDD's cache was at fault?
  15. Partition on ceph corrupt

    Module             Size  Used by
    veth              32768  0
    ebtable_filter    16384  0
    ebtables          40960  1 ebtable_filter
    ip_set            53248  0
    ip6table_raw      16384  0
    iptable_raw       16384  0
    ip6table_filter   16384  0
    ip6_tables        ...