Ceph Recovery/Rebalance back and forth?

PmUserZFS

Renowned Member
Feb 2, 2018

Recovery / Rebalance


[Attached screenshot: recurring Ceph recovery/rebalance activity]


What could cause this behaviour?

All disks are fine and no OSDs are failing. Only one empty VM is using the Ceph pool as storage, so there is basically no usage.

I have verified smartctl on all SSDs. Why does the cluster rebalance every now and then? Is that expected behaviour?
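
In case it helps anyone else hitting this: periodic recovery/backfill with healthy OSDs is often the PG autoscaler splitting or merging placement groups, or the balancer moving PGs around. These standard Ceph commands (no Proxmox-specific tooling assumed) show what is driving the data movement:

Bash:
# overall health plus any active recovery/backfill
ceph -s
# is the autoscaler planning a pg_num change on any pool?
ceph osd pool autoscale-status
# is the balancer enabled and actively optimizing?
ceph balancer status
# recent cluster log entries often show pg_num changes
ceph log last 50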
 
Thank you for the information. You are correct:

Bash:
# ceph osd pool autoscale-status
POOL        SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK   
.mgr       1985k                3.0         3353G  0.0000                                  1.0       1              on         False 
cephpool   4356M                3.0         3353G  0.0038                                  1.0      32              on         False


So the back-and-forth was just the PG autoscaler adjusting pg_num for the pool; with no NEW PG_NUM pending, it's all good and fine :)
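
For completeness, if the recurring rebalance is unwanted, the autoscaler can be given a target up front so it settles on a pg_num once, or it can be disabled per pool. A minimal sketch using standard Ceph commands and the cephpool name from the output above:

Bash:
# hint the pool's expected share of raw capacity so pg_num is chosen once
ceph osd pool set cephpool target_size_ratio 1.0
# or disable autoscaling for this pool and manage pg_num manually
ceph osd pool set cephpool pg_autoscale_mode off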