Thank you for your reply. Have a nice day.

Never tried it, and it is definitely a "your mileage may vary" situation. Ideally, and that is the beauty of Ceph, you just recreate those OSDs with the new DB SSD. Ceph will do some rebalancing, but if you have enough OSDs and nodes in your cluster, you will never have a reduced-redundancy situation.
Ideally, you first set the affected OSDs to out. Wait for Ceph to recreate the data that is on those OSDs somewhere else in the cluster. Once Ceph reports HEALTH_OK, you can stop and destroy those OSDs.
Then recreate them with the new DB SSD and Ceph will rebalance again.
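As a rough sketch of that drain-and-replace sequence for a single OSD (the OSD id and device paths below are placeholders, the destructive destroy/recreate steps are left as comments, and verify the commands against your own setup), it could be automated around the plain ceph CLI like this:

```python
#!/usr/bin/env python3
# Rough sketch: drain one OSD, wait for the cluster to heal, then it is safe to destroy it.
# Assumes the standard `ceph` CLI is available on the node; the OSD id below is a placeholder.
import subprocess
import time

OSD_ID = "7"  # placeholder: one of the OSDs whose DB lives on the SSD being replaced

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout.strip()

# 1. Mark the OSD out so Ceph re-creates its data elsewhere in the cluster.
#    The OSD keeps serving its data while it drains, so redundancy is not reduced.
ceph("osd", "out", OSD_ID)

# 2. Wait until the cluster reports HEALTH_OK again, i.e. all PGs are fully re-replicated.
while not ceph("health").startswith("HEALTH_OK"):
    time.sleep(60)

# 3. Only now stop and destroy the OSD (run on the node hosting it), e.g.:
#      systemctl stop ceph-osd@7
#      ceph osd destroy 7 --yes-i-really-mean-it
# 4. Then recreate it with its DB on the new SSD, e.g. on Proxmox:
#      pveceph osd create /dev/sdX --db_dev /dev/<new-db-ssd>
```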
Will keep this in mind. Thank you for your recommendation.

If you have the physical space for adding the new SSD _before_ removing the old one, you could even add a new OSD first and then set the affected OSD out (not down!) and wait for Ceph to rebalance. That would save you the second rebalancing while maintaining full redundancy during the operation.
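In that case the order of operations could look roughly like this (again only a sketch: the OSD id and device names are placeholders, and the old OSD is only destroyed after the backfill has finished):

```python
#!/usr/bin/env python3
# Rough sketch of the "add first, drain second" variant: the replacement OSD is
# created on the new DB SSD before the old OSD is marked out, so data only moves once.
import subprocess

OLD_OSD_ID = "7"             # placeholder: OSD whose DB sits on the old SSD
NEW_DATA_DEV = "/dev/sdX"    # placeholder: data disk for the replacement OSD
NEW_DB_SSD = "/dev/nvme0n1"  # placeholder: the newly added DB SSD

def run(*cmd: str) -> None:
    """Run a command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

# 1. Create the replacement OSD with its DB on the new SSD (Proxmox wrapper shown here;
#    plain ceph-volume would work as well).
run("pveceph", "osd", "create", NEW_DATA_DEV, "--db_dev", NEW_DB_SSD)

# 2. Mark the old OSD out (but not down!) so it keeps serving its data while Ceph
#    backfills onto the new OSD; full redundancy is maintained the whole time.
run("ceph", "osd", "out", OLD_OSD_ID)

# 3. Once `ceph health` reports HEALTH_OK again, stop and destroy the old OSD
#    exactly as in the previous sketch.
```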
@aaron I have 15 OSDs: 5 OSDs on each SSD, and I believe the redundancy is set to 3/2. Is it better to remove 1 OSD at a time, or all 5 OSDs that are on the same SSD? To be clear: 3 Ceph hosts, with 5 OSDs on each host.