Search results

  1. Partition on ceph corrupt

    Hello everyone. I have a big problem. For three years I have run a Ceph cluster for my data: it has a 9 TB pool, on which I created an 8.5 TB VM disk. A few days ago one OSD became full. I followed a short procedure: marked the OSD out so writes to the VM disk could continue, then cleaned up a lot of unused files on the VM disk to free space, and...
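    A common gotcha behind this kind of report: deleting files inside the VM does not, by itself, return space to Ceph, because the RBD image is thin-provisioned and only shrinks when discard/trim reaches the storage layer. A minimal sketch of the usual checks (the OSD id `3` and the discard note are illustrative assumptions, not from the post):

    ```shell
    # On a Ceph node: check cluster-wide and per-OSD usage
    ceph df
    ceph osd df

    # Mark the full OSD out so data rebalances off it
    # (replace 3 with the id of the full OSD)
    ceph osd out 3

    # Inside the VM: freed blocks only reach Ceph if discard is enabled
    # on the Proxmox virtual disk (e.g. VirtIO-SCSI with "discard" set);
    # then trim all mounted filesystems:
    fstrim -av
    ```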
  2. Big fail on ceph

    Hello all. First, I have a two-node cluster, and my Ceph setup runs with 6 OSDs (see image). Today one of my OSDs went down and I cannot restart it. Some background: three months ago I replaced OSD 5 with a bigger disk (1 TB before, 3 TB now), but my pool capacity did not grow. In the second image you can see 5.25 TB, but with the new 3 TB...
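    A plausible reason the pool did not grow after swapping in the bigger disk is that the OSD's CRUSH weight stayed at the old 1 TB value, so Ceph keeps placing data as if the disk were still small. A hedged sketch of how this is typically checked and fixed (`osd.5` and the weight `2.73`, roughly 3 TB in TiB, are example values):

    ```shell
    # Show CRUSH weights side by side with actual disk sizes
    ceph osd df tree

    # If the weight still reflects the old 1 TB disk, raise it to match
    # the new capacity (weight is expressed in TiB)
    ceph osd crush reweight osd.5 2.73
    ```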
  3. ZFS pool faulted

    Hi everyone. A few days ago I extended the zpool that holds my VM disks. Before, my zpool "VM_Storage3" was on a 500 GB HDD; now I have a 1 TB HDD. I have two nodes, and the problem is on the second node. I successfully migrated and extended my ZFS pool following this how-to...
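    For context, growing a zpool after replacing a disk with a larger one usually comes down to letting ZFS expand into the new space. A minimal sketch using the pool name from the post (the device path `/dev/sdb` is an assumption; use the actual replaced device):

    ```shell
    # Allow the pool to grow automatically into newly available space
    zpool set autoexpand=on VM_Storage3

    # Trigger expansion of the replaced device explicitly
    zpool online -e VM_Storage3 /dev/sdb

    # Verify the new pool size
    zpool list VM_Storage3
    ```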
  4. Ceph storage

    Hello all. I have a two-node cluster. All my VM disks were stored on HDD, and I have a Ceph pool for my OMV. Everything worked fine for a few months; today I want to increase the capacity of my Ceph storage. I have no room for an additional disk, so I must replace my 1 TB disk with a 3 TB one. First I did what was needed for the...
  5. Ceph Pool

    Hello all. My situation: I have two nodes running Proxmox 7, both working in a cluster, with Ceph installed on both. Node 1 has 3 OSDs: osd1 4 TB, osd2 2 TB, osd3 1 TB. Node 2 has osd4 1 TB and osd5 4 TB. Why such different disks? Because of money. Both nodes have a 500 GB HDD for my VMs (mail, hassio...). Now my goal is to...