Hi Aaron,
root@cephnode1:~# pveceph pool ls --noborder
Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name %-Used Used
.mgr 3 2 1 1 1 on replicated_rule 2.00872318600887e-06 3354624
cephStrg 3 2 128 128 on replicated_rule 0.957875311374664 37974788415785
root@cephnode1:~# ceph balancer status
{
    "active": true,
    "last_optimize_duration": "0:00:00.000584",
    "last_optimize_started": "Tue Jul 16 10:30:41 2024",
    "mode": "upmap",
    "no_optimization_needed": true,
    "optimize_result": "Too many objects (0.050333 > 0.050000) are misplaced; try again later",
    "plans": []
}
I just used the initial settings recommended by the GUI.
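As far as I understand, the threshold in that "Too many objects ... are misplaced" message comes from the mgr option target_max_misplaced_ratio (default 0.05), so something like this should show it and, if I really wanted to, temporarily raise it (the 0.07 is just an example value, not a recommendation for my cluster):

ceph config get mgr target_max_misplaced_ratio
ceph config set mgr target_max_misplaced_ratio 0.07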
>>>And you should rethink your cluster setup. In a 3-node cluster you should have either only 1 or at least 4 OSDs, because if one of the OSDs fails, Ceph will try to recover the lost data to the remaining OSDs in the same node, which will also very quickly put you into a situation where there is not enough space.
I need to delve deeper into this subject.
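While I read up on it, I'll check how much headroom the OSDs and nodes actually have with something like the following (read-only as far as I know; the ratios it prints should just be the Ceph defaults on my cluster):

ceph osd df tree          # per-OSD / per-host usage and PG distribution
ceph osd dump | grep ratio   # full_ratio, backfillfull_ratio, nearfull_ratio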