rebalance

  1. [SOLVED] Ceph hang in Degraded data redundancy

    Flow: (1) a server was rebooted for power maintenance; (2) after the reboot I noticed it had bad clock sync - fixing that and doing another reboot solved it; (3) once the time sync was fixed, the cluster started to load and rebalance; (4) it then hung in an error state (the data looks OK and everything is stable and... (a status-check sketch follows this list)
  2. Ceph very slow rebalancing ~300 KiB

    Hi, I have recreated an OSD in my hyperconverged cluster. I have a 10 Gbit link, so rebalancing should be really fast, but it seems to rebalance at only a few kilobytes per second. I have already set ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4' and ceph tell 'osd.*' injectargs... (a recovery-tuning sketch follows this list)
  3. Expanding ZFS mirrors

    So I'm migrating my cluster to something a little smaller, moving from a Ceph cluster down to a single node with ZFS. One major feature I like about Ceph is that when I add a new drive (or twelve), the data gets automatically rebalanced across the cluster. Part of my goal is to spend minimal money...
  4. Ceph Node Maintenance with Least Rebalance?

    I have a Proxmox cluster I'm looking at where I need to redo one server: they have asked for more disks to be added and for the OS disk to be moved to another disk in the box. If possible, I would like to just take the server down, reinstall the OS onto the new disk, and then bring the disks... (a maintenance-flag sketch follows this list)
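
For a cluster that hangs in a degraded/error state after a reboot, as in thread 1, the usual first step is to see exactly which PGs and OSDs the health warnings point at. A minimal status-check sketch using standard Ceph CLI commands (output details vary slightly between releases):

    ceph -s                       # overall health and recovery/rebalance progress
    ceph health detail            # which PGs/OSDs the degraded warnings actually name
    ceph osd df tree              # per-OSD usage; spot down, out, or nearly full OSDs
    ceph pg dump_stuck unclean    # PGs that are no longer making progress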
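
For the slow-rebalance question in thread 2, a small tuning sketch continuing the commands the poster already quoted. The value 4 is illustrative only, not a recommendation; injectargs changes are runtime-only, and the persistent `ceph config set` form assumes a reasonably recent release:

    # runtime only: raise concurrent backfill/recovery ops per OSD
    ceph tell 'osd.*' injectargs '--osd-max-backfills 4'
    ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'

    # persistent equivalent on newer releases
    ceph config set osd osd_max_backfills 4
    ceph config set osd osd_recovery_max_active 4

    # watch whether recovery throughput actually changes
    ceph -s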
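
For the node-maintenance question in thread 4, the usual way to take a node down for an OS reinstall without triggering a rebalance is to set the noout (and optionally norebalance) flags first and clear them once the OSDs are back. A sketch with the stock Ceph flags:

    # before shutting the node down: don't mark its OSDs out, don't start moving data
    ceph osd set noout
    ceph osd set norebalance

    # ...power off, reinstall the OS onto the new disk, bring the node and its OSDs back up...

    # once all OSDs are up and in again
    ceph osd unset norebalance
    ceph osd unset noout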