Hello all,
I don't have any experience with Ceph and I wanted to get some people's opinions on this:
- 3 Node Cluster (2x R510, 1xR520)
- Storage to be used for light VM usage.
Here are the two options I'm considering:
OPTION 1
6 spinning hard disks for OSDs, 2 per node. (6 total OSDs)
3 SSD journal disks, 1 per node.
Drawbacks:
If a disk fails, I have to rebuild the OSD myself using Ceph commands/GUI.
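For reference, the rebuild I'm dreading would look roughly like this (going by the Ceph docs, not experience — assuming osd.3 is the failed OSD and /dev/sdc is the replacement disk):

```shell
# Mark the failed OSD out so Ceph starts re-replicating its data
ceph osd out osd.3

# Stop the daemon on the affected node, then remove the OSD
# from the CRUSH map, its auth key, and the cluster
systemctl stop ceph-osd@3
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm osd.3

# After swapping the physical disk, create a new OSD on it
# (journal/DB on the node's SSD, here assumed to be /dev/sdb)
ceph-volume lvm create --data /dev/sdc --journal /dev/sdb
```

So not terrible, but it's a multi-step manual process compared to just hot-swapping a disk behind a RAID controller.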
Advantages:
?
OPTION 2
6 spinning disks, but in 3 RAID1 mirrors using Dell's RAID controllers, 1 array per node. (3 total OSDs)
3 SSD journal disks, 1 per node.
Drawbacks:
?
Advantages:
Trivial to replace a failed disk without having to touch Ceph at all.