Hi,
my first tests with Ceph have been really interesting and it runs great.
What I haven't figured out yet: does Ceph consider only the OSDs for distribution, or also the nodes?
Just a theoretical setup to show what I mean:
2 servers with 3 × 3 TB HDDs for Ceph
1 server with 6 × 1.5 TB HDDs for Ceph
So if Ceph distributes the data only evenly over the OSDs, I have a bigger problem if the 6-HDD node fails, since half of all the data would live on that one machine.
If Ceph tries to distribute the data evenly over the nodes instead, it would be safer in this case.
Therefore I am wondering: what is the default behavior?
If I understood correctly, there are many ways to modify the CRUSH map for special behaviors like this.
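From what I've read so far, I assume the placement across hosts vs. OSDs is controlled by the `chooseleaf` step in a CRUSH rule, something like the sketch below (just my understanding of the documented rule syntax; the rule name and the `default` root are only the usual defaults, not anything from my cluster):

```
# Hypothetical CRUSH rule sketch: "type host" would place each replica
# on a different host, while "type osd" would only separate by OSD.
rule replicated_rule {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```

Is that roughly how the default rule looks, i.e. does it already separate replicas by host out of the box?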
Thanks