That's correct. For safety, be sure to let the cluster return to healthy between the stop and destroy steps.
Our users don't feel a thing...until I get impatient and increase osd-max-backfills too high. ;)
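For anyone curious, the backfill throttle can be adjusted at runtime. A sketch (the value 3 is just an example; the conservative default on most releases is 1):

```shell
# Raise the per-OSD backfill limit cluster-wide (persists in the config db).
# Higher values speed up rebalancing but increase impact on client I/O.
ceph config set osd osd_max_backfills 3

# Revert to the default once the rebalance is done:
ceph config set osd osd_max_backfills 1
```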
The answer to your question is yes.
We used the excellent wiki article referenced above by Alwin to mirror disk images from our HQ cluster to our DR cluster located at a remote datacenter. Communication is via SD-WAN and a 150 Mbps broadband internet circuit. We are currently mirroring 27 disk...
We have an 8-node cluster, 5 of which run Ceph with both ssd and hdd class disks. We have recently been upgrading our capacity by replacing 4 TB disks with 14 TB disks. The process has been uneventful. We set the target osd to out and stop it, wait for the cluster to re-balance, then destroy and...
Thanks Alwin, that's what I thought. My DR cluster is now replaying journals of 30 images, just over 4 TB, across a 150 Mbps internet circuit via SD-WAN. Hence the need for a little capacity upgrade. Thanks again for the rbd-mirror instructions!
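For anyone following along, replay progress can be checked from the DR side; a sketch, where the pool and image names are placeholders for your own:

```shell
# Summary of mirroring health and per-image state for the pool:
rbd mirror pool status rbd --verbose

# Detail for a single image, including how far behind the replay is:
rbd mirror image status rbd/vm-100-disk-0
```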
dmulk:
We never had/allowed a "homogeneous" environment...
I have a similar configuration to yours; however, I began with HDDs with their journals on a small (200 GB) SSD. When I added some SSD storage, I created the following rules and applied them to the appropriate pools.
ceph osd crush rule create-replicated ssdrule default host ssd
(applied to pool...
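To show the full pattern, here is a sketch of both device-class rules and how they get applied; the rule and pool names (hddrule, ssdpool, hddpool) are illustrative, not the exact ones from my cluster:

```shell
# Replicated rules pinned to a device class, one per class:
ceph osd crush rule create-replicated ssdrule default host ssd
ceph osd crush rule create-replicated hddrule default host hdd

# Point each pool at the matching rule; data migrates automatically:
ceph osd pool set ssdpool crush_rule ssdrule
ceph osd pool set hddpool crush_rule hddrule
```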
Thank you for the 5.4 release and your ongoing efforts to improve the usability of ceph as deployed by Proxmox. Can anyone comment on any improvements planned for the support of rbd-mirror configurations?
Thanks,
K
I can confirm this behavior.
We used the same process we have followed in the past to replace a failed disk via the gui.
-set osd out, wait for healthy cluster.
-stop osd.
-destroy osd and partition.
The cluster immediately began re-balancing which I didn't understand until I looked at the log...
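The same sequence from the CLI looks roughly like this; osd id 12 is a placeholder, and on Proxmox the destroy step can also be done with pveceph:

```shell
# Drain the OSD, then wait for HEALTH_OK / all PGs active+clean:
ceph osd out 12

# Stop the daemon on the host that owns it:
systemctl stop ceph-osd@12

# Destroy the OSD (keeps the id reusable for the replacement disk):
ceph osd destroy 12 --yes-i-really-mean-it
```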
For us the one-way mirror configuration is sufficient, and we have successfully tested this between external, Ubuntu-hosted Ceph clusters. However, we have become accustomed to managing Ceph via Proxmox, and we like it. We attempted mirroring Proxmox-hosted Ceph to a remote Ubuntu-hosted Ceph but...
For me, this is not a HA conversation. I suspect many of us would be satisfied with a one-way mirror. For instance, I could maintain an Ubuntu/CentOS Ceph cluster at a remote location and replay journaled images there. However, rbd-mirror is not in the Proxmox Ceph repository. I think a discussion...
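For reference, a one-way, journal-based setup is only a handful of commands; this is a sketch, with the pool name (rbd), image name, and peer/client names all illustrative:

```shell
# On both clusters: enable mirroring on the pool, per-image mode:
rbd mirror pool enable rbd image

# On the primary: journaling (needs exclusive-lock) on each image to mirror,
# then enable mirroring for it:
rbd feature enable rbd/vm-100-disk-0 exclusive-lock journaling
rbd mirror image enable rbd/vm-100-disk-0

# On the DR cluster only: register the primary as a peer and run the daemon:
rbd mirror pool peer add rbd client.rbd-mirror-peer@primary
systemctl enable --now ceph-rbd-mirror@rbd-mirror.service
```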