Hi guys,
First, I would like to thank all the people working on a product that you can just deploy and almost forget about, with no need for continuous fixing and repairing, while you focus on the upper-layer services (VMs) that the platform helps you roll out.
Second, I apologize if this subject has been discussed before and I was too lazy to search for the answer.
My question is about a deployment I am currently involved in, starting with a set of 4 x Dell PE R640 nodes. They are equipped with the PERC H730P Mini controller with 2 GB of cache, and the data disks are all 1.9 TB 12 Gb/s SAS SSDs (4 per node). It seems that with this card I can configure a mixed mode: RAID1 for the OS disks and passthrough (non-RAID) for the rest of the data disks, if needed.
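For reference, here is roughly how that mixed setup could be scripted with perccli (Dell's rebranded storcli). This is only a sketch under assumptions: the controller enumerates as /c0, and the enclosure/slot IDs below are placeholders, so confirm them first with perccli /c0 show.

```
# List controller, enclosures and drives to confirm real IDs (the ones
# below are placeholders)
perccli /c0 show

# RAID1 virtual disk for the two OS disks, e.g. enclosure 32, slots 0-1
perccli /c0 add vd type=raid1 drives=32:0-1

# Expose the four data SSDs as non-RAID (passthrough) devices, slots 2-5
# (some firmware revisions may require enabling JBOD on the controller
# first with: perccli /c0 set jbod=on)
perccli /c0/e32/s2-5 set jbod
```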
The plan is to use the platform as a hyper-converged setup, with Ceph as the underlying storage. I have some past Ceph experience, starting with Jewel, and I have been through a lot of performance issues related to disk and controller types.
Now, given that I am starting with only 4 nodes and all disks are enterprise-grade SSDs:
1. Would it be better to configure each of the OSD disks as one of the following? (A quick way to compare these modes empirically is the fio test sketched after this list.)
- RAID0 with cache enabled
- passthrough with cache enabled
- passthrough without cache
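Since the best answer often depends on the specific controller/firmware combination, it may be worth benchmarking one disk in each mode before committing. A minimal sketch of the usual single-job sync-write test that approximates Ceph journal/WAL behaviour; note that /dev/sdX is a placeholder and the test is destructive, so run it only on an empty disk:

```
# Destructive: overwrites the target device. Replace /dev/sdX first.
fio --name=ceph-syncwrite --filename=/dev/sdX \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting
```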
2. I have also quoted 2 x write-intensive 400 GB disks per node to be used as journal disks (see the sketch after this list for how a separate journal/DB device is typically attached):
- Should I drop them and not use separate journal disks at all, given that the OSDs are all SAS SSDs?
- If separate journal disks are still recommended, should I put them in a RAID1 array for fault tolerance?
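For context, on current BlueStore-based releases the FileStore journal's role is taken by the RocksDB/WAL device. A minimal sketch of attaching one at OSD creation time, assuming a recent Proxmox VE with pveceph; the device names are placeholders, and the plain-Ceph equivalent would be ceph-volume lvm create --data ... --block.db ...:

```
# Create an OSD on /dev/sdb with its RocksDB/WAL on the faster /dev/sdc
# (placeholders - adjust to your devices; older FileStore-era setups used
# a journal device option instead of --db_dev)
pveceph osd create /dev/sdb --db_dev /dev/sdc
```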
3. Any other recommendations regarding the controller / disks / RAID / passthrough setup?
Initially I considered the PERC H330, but I gave up on it because of its lack of cache and other users' reports of poor performance.
Thank you so much, and have a very nice weekend!
Cheers,
Leo