> This is my first experience with Ceph, so I don't know it very well.

What do you expect to gain by creating multiple pools and assigning them dedicated OSDs (via device class) instead of using one big pool?
> Is there no simple way to assign, e.g., 2 disks to one pool and the other 2 disks to another pool (on every server, of course)?

Without assigning dedicated OSDs to the pools (giving each group of disks a specific device class and assigning each pool a CRUSH rule that targets that device class), the pools will use the same OSDs.
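As a rough sketch of that approach (the class names, rule names, pool names, and OSD IDs below are all hypothetical, and an OSD's existing class must be removed before a new one is set):

```shell
# Tag 2 disks per host for one pool and 2 for the other with custom
# device classes (repeat per server with the matching OSD IDs).
ceph osd crush rm-device-class osd.0 osd.1
ceph osd crush set-device-class classA osd.0 osd.1
ceph osd crush rm-device-class osd.2 osd.3
ceph osd crush set-device-class classB osd.2 osd.3

# Create one replicated CRUSH rule per device class
# (root "default", failure domain "host").
ceph osd crush rule create-replicated rule-classA default host classA
ceph osd crush rule create-replicated rule-classB default host classB

# Point each pool at its rule; its data then lands only on the
# OSDs of the matching device class.
ceph osd pool set pool-a crush_rule rule-classA
ceph osd pool set pool-b crush_rule rule-classB
```

After changing a pool's CRUSH rule, Ceph rebalances the affected data onto the targeted OSDs.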
Are the disks for the OSDs all the same model?
> So Ceph distributes the data efficiently across the disks (giving a RAID-0-like effect)?

Do not try to compare it to RAID.
ceph osd set-nearfull-ratio 0.6
> But I suppose the limit will be the network (25 Gbit in my configuration).

Possibly, but keep in mind that OSDs might also be CPU bound.
> Are the disks OK even if some are from a different vendor?

Yes. Depending on the specs, you might see the cluster performance vary a bit. The disks should be (roughly) the same size, as larger disks get more data stored on them and therefore more load, which could turn them into bottlenecks.
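One way to spot such an imbalance on a running cluster is to compare per-OSD size and utilization:

```shell
# Show each OSD's CRUSH weight, size, raw use, and %USE, grouped by
# host. With mixed-size disks, the larger OSDs will show higher use
# (and receive proportionally more I/O).
ceph osd df tree
```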