Hi,
I now have a running three-node Ceph cluster with two SSD storage nodes and one monitor.
What I would like to achieve is to add another two storage nodes with spinning drives, create a new pool on them, and keep the two pools separated.
I suppose I need to edit the CRUSH map to add the new hosts and drives, along the lines of http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/, but I am not sure how that interacts with pveceph.
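Based on that blog post, I imagine the workflow would be roughly this (the hdd root/rule names, pool name, and hosts cl3/cl4 are just my placeholders, not anything pveceph creates):

# dump and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt: add a second root and a rule for the spinners, e.g.
#   root hdd {
#       id -5
#       alg straw
#       hash 0
#       item cl3 weight 2.000
#       item cl4 weight 2.000
#   }
#   rule hdd {
#       ruleset 1
#       type replicated
#       min_size 1
#       max_size 10
#       step take hdd
#       step chooseleaf firstn 0 type host
#       step emit
#   }

# recompile and inject the new map
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin

# create the new pool and point it at the hdd rule (ruleset 1)
ceph osd pool create hddpool 128 128
ceph osd pool set hddpool crush_ruleset 1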
Should I install the new nodes with pveceph install and then create the OSDs via the GUI, or rather via the CLI with ceph-disk zap, or does it not matter?
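For the CLI route, I assume it would be something like this on each new node (device names are just examples):

pveceph install                  # install the Ceph packages on the new node
ceph-disk zap /dev/sdb           # wipe the disk
pveceph createosd /dev/sdb       # create the OSD on it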
Won't that trigger a rebalance of the current Ceph pool?
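The only thing I found that might prevent this is the osd crush update on start option, so new OSDs do not get placed under root default automatically; something like this in ceph.conf, if I understand it correctly:

[osd]
# keep new OSDs out of the CRUSH map on startup so I can place them
# under the hdd root manually without triggering a rebalance
osd crush update on start = false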
This is the current output of ceph osd tree:
# id    weight  type name       up/down reweight
-1      1.68    root default
-2      0.84            host cl2
0       0.21                    osd.0   up      1
1       0.21                    osd.1   up      1
2       0.21                    osd.2   up      1
3       0.21                    osd.3   up      1
-3      0.84            host cl1
4       0.21                    osd.4   up      1
5       0.21                    osd.5   up      1
6       0.21                    osd.6   up      1
7       0.21                    osd.7   up      1
Thank you in advance for your answers.