Thanks Alwin, that's what I thought. My DR cluster is now replaying journals for 30 images, just over 4 TB, across a 150 Mb internet circuit via SD-WAN. Hence the need for a little capacity upgrade. Thanks again for the rbd-mirror instructions!
We never had/allowed a "homogeneous" environment...
I have a similar configuration to yours; however, I began with HDDs with their journals on a small (200 GB) SSD. When I added some SSD storage, I created the following rules and applied them to the appropriate pools.
ceph osd crush rule create-replicated ssdrule default host ssd
(applied to pool...
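For anyone following along, assigning the rule to a pool is a one-liner. This is a sketch, assuming a device-class-aware cluster; the pool name "rbd-ssd" is just an example, not from the post above:

```shell
# Create a replicated rule that only selects SSD-class OSDs, one replica per host
ceph osd crush rule create-replicated ssdrule default host ssd
# Point an existing pool at the new rule ("rbd-ssd" is a hypothetical pool name)
ceph osd pool set rbd-ssd crush_rule ssdrule
# Verify which rule the pool now uses
ceph osd pool get rbd-ssd crush_rule
```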
Thank you for the 5.4 release and your ongoing efforts to improve the usability of Ceph as deployed by Proxmox. Can anyone comment on any improvements planned for the support of rbd-mirror configurations?
I can confirm this behavior.
We used the same process we have followed in the past to replace a failed disk via the GUI:
- set the OSD out, wait for a healthy cluster.
- destroy the OSD and partition.
The cluster immediately began re-balancing, which I didn't understand until I looked at the log...
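As a general precaution (not necessarily the fix for the behavior described above), Ceph's `noout` flag is the usual way to keep the cluster from reacting to an OSD disappearing during maintenance. A hedged sketch, with OSD id 12 purely as an example:

```shell
# Tell Ceph not to automatically mark OSDs "out" while we work
ceph osd set noout
# Stop and destroy the failed OSD (id 12 is hypothetical);
# "destroy" keeps the OSD id in the CRUSH map for reuse by the replacement disk
systemctl stop ceph-osd@12
ceph osd destroy 12 --yes-i-really-mean-it
# ... swap the disk, recreate the OSD with the same id, then:
ceph osd unset noout
```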
For us the one-way mirror configuration is sufficient, and we have successfully tested this between external, Ubuntu-hosted Ceph clusters. However, we have become accustomed to managing Ceph via Proxmox, and we like it. We attempted mirroring Proxmox-hosted Ceph to a remote Ubuntu-hosted Ceph but...
For me, this is not an HA conversation. I suspect many of us would be satisfied with a one-way mirror. For instance, I could maintain an Ubuntu/CentOS Ceph cluster at a remote location and replay journaled images there. However, rbd-mirror is not in the Proxmox Ceph repository. I think a discussion...
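For others weighing the same setup, the core of a one-way, journal-based rbd-mirror configuration is small. This is a sketch only, assuming pool-mode mirroring; the pool name "mypool", the image "myimage", the peer spec "client.admin@primary", and the daemon instance name are all example values:

```shell
# On the primary (source) cluster: journaling must be enabled on mirrored images
rbd feature enable mypool/myimage exclusive-lock journaling
rbd mirror pool enable mypool pool

# On the DR (destination) cluster: enable mirroring and register the primary as a peer
rbd mirror pool enable mypool pool
rbd mirror pool peer add mypool client.admin@primary

# For one-way replication, run the rbd-mirror daemon on the DR side only
systemctl enable --now ceph-rbd-mirror@admin

# Watch replay progress
rbd mirror pool status mypool --verbose
```

The asymmetry is the point: only the receiving cluster needs the rbd-mirror daemon, which is why a lightweight DR cluster on a different distro can pull from a Proxmox-hosted primary.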