Please post the output of the following commands inside [code][/code] tags, or use the formatting options of the editor:

[code]
pveceph pool ls --noborder
ceph balancer status
[/code]

[quote]I just used the initial settings recommended by the GUI.[/quote]
Which is fine, but it seems that the cluster got too full, and the data was not distributed as evenly as it should have been. It might have filled up faster than the balancer was able to equalize the distribution across the OSDs.
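If the balancer turns out not to be active, a minimal sketch of checking and enabling it could look like the following (assuming the cluster's clients are recent enough for upmap mode, which generally produces the most even distribution):

[code]
ceph balancer status                               # shows mode and whether the balancer is active
ceph osd set-require-min-compat-client luminous    # upmap mode needs Luminous or newer clients
ceph balancer mode upmap
ceph balancer on
[/code]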
[quote]You would have to define a different device class for each of the two disks in a node, and use device-class-specific rules to make sure that a pool will only use one of the OSDs.[/quote]
In this case, would it be correct to create two pools, each pool using one OSD per node?
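A sketch of what that device-class approach could look like on the CLI, assuming hypothetical device-class names (diskA, diskB), example OSD IDs, and pool names (poolA, poolB):

[code]
# Tag the two OSDs in each node with their own device class
# (repeat for the matching disks in every node; osd.0/osd.1 are examples)
ceph osd crush rm-device-class osd.0 osd.1
ceph osd crush set-device-class diskA osd.0
ceph osd crush set-device-class diskB osd.1

# One replicated CRUSH rule per device class, with host as the failure domain
ceph osd crush rule create-replicated rule-diskA default host diskA
ceph osd crush rule create-replicated rule-diskB default host diskB

# Bind each pool to its rule so it only places data on that class of OSDs
ceph osd pool set poolA crush_rule rule-diskA
ceph osd pool set poolB crush_rule rule-diskB
[/code]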