Search results

  1. Partition on ceph corrupt

    I think the problem is here: 2024-02-05T21:45:25.452+0100 7f3399cda080 0 _get_class not permitted to load kvs 2024-02-05T21:45:25.452+0100 7f3399cda080 0 _get_class not permitted to load lua 2024-02-05T21:45:25.456+0100 7f3399cda080 0 <cls> ./src/cls/hello/cls_hello.cc:316: loading cls_hello...
  2. Partition on ceph corrupt

    Feb 05 21:27:39 bsg-galactica systemd[1]: Starting Ceph object storage daemon osd.0... Feb 05 21:27:39 bsg-galactica systemd[1]: Started Ceph object storage daemon osd.0. root@bsg-galactica:~# tail -f /var/log/ceph/ceph-osd.0.log 2024-02-05T21:27:45.353+0100 7f2b7677a080 4 rocksdb: EVENT_LOG_v1...
  3. Partition on ceph corrupt

    OK, I see. Now I have added one OSD to help the system, but I have another OSD that won't start. The OSD log: ...64s, timeout is 5.000000s 2024-02-05T21:11:57.201+0100 7f70b659a080 -1 bdev(0x561f78ff9c00 /var/lib/ceph/osd/ceph-0/block) read stalled read...
  4. Partition on ceph corrupt

    Just a question: why can I copy some files and not others? And why does it freeze or block when I try a recovery operation with fsck or another tool?
  5. Partition on ceph corrupt

    Yes, see: cluster: id: 2c042659-77b4-4303-8ecb-3f6a88cd7d54 health: HEALTH_WARN noout flag(s) set 3 nearfull osd(s) Reduced data availability: 40 pgs inactive, 40 pgs incomplete Degraded data redundancy: 54344/3726046 objects...
  6. Partition on ceph corrupt

    ceph health detail HEALTH_WARN noout flag(s) set; 3 nearfull osd(s); Reduced data availability: 40 pgs inactive, 40 pgs incomplete; Low space hindering backfill (add storage if this doesn't resolve itself): 41 pgs backfill_toofull; Degraded data redundancy: 57275/3726142 objects degraded...
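
    A hedged sketch of how warnings like the ones above might be inspected further (the PG ID 2.28 is only a placeholder, not a value from the thread):

      ceph health detail            # full list of the affected PGs and nearfull OSDs
      ceph pg dump_stuck inactive   # which PGs are inactive/incomplete and which OSDs they map to
      ceph pg 2.28 query            # per-PG detail, e.g. which OSD the PG is waiting for
      ceph osd df tree              # per-OSD usage, to spot the nearfull ones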
  7. Partition on ceph corrupt

    ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -1 18.19342 - 18 TiB 14 TiB 14 TiB 206 MiB 24 GiB 4.2 TiB 76.67 1.00 - root default -3 8.18707 - 8.2 TiB 7.0 TiB...
  8. Partition on ceph corrupt

    Yes, see my screenshot.
  9. Partition on ceph corrupt

    Hello everyone, I have a big problem. For three years I have had a Ceph cluster for my data; for this I have a 9 TB pool and I created an 8.5 TB VM disk. A few days ago one OSD became full, so I followed a small procedure: marked the OSD out so I could keep writing to the VM disk, cleaned up a lot of unused files on the VM disk to free space, and...
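
    The procedure is cut off above; a rough sketch of what such a sequence can look like (osd.2 is an example ID, not taken from the thread):

      ceph osd set noout      # avoid rebalancing while working on the full cluster
      ceph osd out 2          # stop placing new data on the full OSD
      # ...delete unused files inside the VM to free space, then:
      ceph osd in 2
      ceph osd unset noout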
  10. Big fail on ceph

    Hello all. First, I have a cluster of two nodes and my Ceph runs with 6 OSDs (see image). Today one of my OSDs is down and I can't restart it. In fact, three months ago I replaced OSD 5 with a bigger disk (before 1 TB, now 3 TB), but my pool did not grow: in the second image you see 5.25 TB, but with the new 3 TB...
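
    A common reason a pool does not grow after swapping in a bigger disk is that the CRUSH weight of the OSD still reflects the old size. A hedged sketch of how that could be checked and adjusted (the value 2.7 is only an illustration for a ~3 TB disk):

      ceph osd tree                       # compare the CRUSH weight of osd.5 with its real size
      ceph osd df                         # raw capacity reported per OSD
      ceph osd crush reweight osd.5 2.7   # set the CRUSH weight to roughly the size in TiB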
  11. ZFS pool faulted

    Hi everyone. A few days ago I extended the zpool for my VM disks. Before, I had a zpool "VM_Storage3" on a 500 GB HDD; now I have a 1 TB HDD. I have two nodes and the problem is on the second node. I migrated successfully and extended my ZFS pool with this how-to...
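
    Growing a zpool after moving to a bigger disk usually comes down to enabling autoexpand and expanding the device online; a minimal sketch, assuming the pool name VM_Storage3 from the post and a placeholder device /dev/sdX:

      zpool set autoexpand=on VM_Storage3
      zpool online -e VM_Storage3 /dev/sdX   # /dev/sdX stands for the replaced disk
      zpool list -v VM_Storage3              # confirm the new size is visible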
  12. Ceph storage

    Nope, I just replaced an HDD on node 1. The old HDD was 1 TB, the new one is 3 TB.
  13. Ceph storage

    Hello all. I have a cluster with two nodes; all my VM disks were stored on HDDs and I have a Ceph pool for my OMV. Everything worked fine for a few months. Today I want to increase the capacity of my Ceph storage. I have no space for an extra disk, so I must replace my 1 TB disk with a 3 TB one. First I did what I needed for the...
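
    The usual way to replace a Ceph OSD disk on a Proxmox node looks roughly like the sketch below (osd.3 and /dev/sdX are placeholders, not values from the thread):

      ceph osd out 3
      systemctl stop ceph-osd@3
      ceph osd purge 3 --yes-i-really-mean-it   # remove the old OSD from the cluster and CRUSH map
      pveceph osd create /dev/sdX               # create a new OSD on the bigger disk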
  14. Ceph Pool

    Hello all. My situation: I have two nodes in Proxmox 7, both working in a cluster, and Ceph is installed on both. Node 1 has 3 OSDs (osd1 4 TB, osd2 2 TB, osd3 1 TB); node 2 has osd4 1 TB and osd5 4 TB. Why different disk sizes? Because of money. Both nodes have a 500 GB HDD for my VMs (mail, hassio, ...). Now my goal is to...