rbd-mirror support

Kmgish

I have seen very little discussion specific to Proxmox regarding the new feature in the Jewel release of Ceph that supports replication of pools/images to a remote site for disaster recovery purposes. Is anyone testing this? Will there be support for such a replication configuration inside Proxmox?

Thanks
K
 
Recently I tested rbd-mirror support in Proxmox to see whether it is feasible for Ceph mirroring between separate datacenters.
I discovered several key points that make rbd-mirror support unlikely in Proxmox, at least with Jewel and Luminous:
  • The Ceph clusters need to have different names. The Proxmox 4.4 pveceph tool creates the Ceph cluster with the default name 'ceph', and there is no configuration option to choose another cluster name at creation time with pveceph. I tried to cheat this requirement, as shown by some Chinese bloggers, by copying ceph.conf to remote.conf and copying the user keyrings as well. All was well until a restart, when Proxmox takes the Ceph configuration from '/etc/pve/priv' and gets confused by the multiple Ceph clusters.
  • RBD mirroring is only available on pools and images that use librbd (fine for KVM VMs), but it is not available with the krbd driver (due to lack of feature support), which is needed for LXC containers.
  • There is no HA support yet in Jewel, but it is somewhat coming in Luminous (active/passive daemon instances).
If you are willing to use this feature, you could create two Ceph clusters outside of Proxmox (or inside of Proxmox, by creating the Ceph clusters manually without using pveceph), configure mirroring between the two clusters, and then add the pool from Ceph1 in PX1 and the pool from Ceph2 in PX2; a sketch of this setup follows below. In that case you would lose the ability to monitor and configure the Ceph clusters within Proxmox. Additionally, you would need to synchronise the VM configuration files between PX1 and PX2 and manually change the mountpoints for the VMs in the event of a failover.
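As a rough sketch of that external two-cluster setup (hedged; the cluster names 'site-a' and 'site-b' are made up, and each cluster needs its own /etc/ceph/<name>.conf and keyrings present on the node where the commands run), following the upstream rbd-mirroring documentation:

Code:
# Enable pool-level mirroring of the 'rbd' pool in both clusters
rbd --cluster site-a mirror pool enable rbd pool
rbd --cluster site-b mirror pool enable rbd pool

# Register each cluster as the other's peer (for a one-way mirror, only
# the backup cluster needs the peer entry and the rbd-mirror daemon)
rbd --cluster site-a mirror pool peer add rbd client.site-b@site-b
rbd --cluster site-b mirror pool peer add rbd client.site-a@site-a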
 
Proxmox VE 5.0 will have preliminary support code for storage replication, inside the same Proxmox cluster, currently for ZFS.

I think it should not be difficult to add support for rbd export | rbd import (almost the same as zfs send | zfs receive); see the sketch below.
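As a rough illustration of what that could look like at the CLI level (a hedged sketch; the pool, image, and snapshot names are made up, and 'remote' stands for any reachable host of the target cluster):

Code:
# Full initial copy of an image to the remote cluster
rbd export rbd/vm-100-disk-1 - | ssh remote rbd import - rbd/vm-100-disk-1

# Incremental follow-up, analogous to an incremental zfs send; snap1
# must already exist on both sides from a previous transfer
rbd snap create rbd/vm-100-disk-1@snap2
rbd export-diff --from-snap snap1 rbd/vm-100-disk-1@snap2 - | ssh remote rbd import-diff - rbd/vm-100-disk-1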

I think it should be possible for rbd mirroring too (but indeed, with 2 Ceph clusters in the same Proxmox cluster, I'm not sure if we can set up 2 different ceph.conf files).

For cross-Proxmox-cluster replication and failover, we have talked about this recently on the dev mailing list. I think it will take time, but it should be on the roadmap.
 
Thank you both, Dzy and spirit, for your replies. I was beginning to think I was the only one interested in this feature of Ceph. Both of you have provided additional insight I did not have, and we will continue our own testing, hopeful that there will be Proxmox integration in the future.
 
Yes, I have! These steps should work with a Ceph cluster created with 'ceph-deploy', but when I followed them on a Proxmox Ceph cluster created with the 'pveceph' utility, the Proxmox node got confused by the multiple Ceph cluster config files and the Ceph cluster failed to start with a runtime error.

Thank you, spirit, for the information regarding Proxmox's future prospects. I use ZFS snapshots on FreeBSD and I absolutely love that functionality, but I haven't tried it on Linux yet. Thanks for pointing to the rbd export / import functionality. It could be another possible solution for remote replication if incremental rbd exports are as good as ZFS incremental snapshots. Currently we would, of course, need to copy the VM configuration files to the remote site manually.
 
Dzy,
It seems there is the caveat that the machine running rbd-mirror cannot be an OSD or a MON, but must be another Ceph client in the cluster. Is this the way you tried it? Here is where I found this caveat: CEPH: Enabling one-way rbd mirror
If that's a one-way mirror, how would one recover from a disaster in such a case? I.e., if you have a cluster of 3 nodes in the main data center and a backup server in a remote site, but the main data center is completely down (fire / theft / flood / etc.), how would you safely copy / move the VMs to the new servers once they are set up?
 
Well, the mirror would contain your guests' hard drive images. Just make sure you have a copy of the guest configuration files and you should be fine; a rough outline of the recovery steps is sketched below.
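As a hedged sketch of that recovery, assuming journal-based mirroring to the backup cluster (the pool 'rbd', image 'vm-100-disk-1', and VM ID 100 are made-up examples):

Code:
# On the surviving backup cluster, force-promote the replicated image to
# primary; --force is needed because the old primary is unreachable
rbd mirror image promote --force rbd/vm-100-disk-1

# Then restore the saved guest configuration on the new node, e.g. as
# /etc/pve/qemu-server/100.conf, with its disk entries pointing at a
# storage definition backed by this cluster's pool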
 
The rbd-mirror alone can be set up as discussed above. But the major part is what happens on migration (manual failover / HA); this part needs inter-cluster awareness and as such has to be planned and programmed first. Only after that can a discussion about supporting something like a "metro-cluster" be started.
 
The rbd-mirror alone can be set up as discussed above. But the major part is what happens on migration (manual failover / HA); this part needs inter-cluster awareness and as such has to be planned and programmed first. Only after that can a discussion about supporting something like a "metro-cluster" be started.
For me, this is not an HA conversation. I suspect many of us would be satisfied with a one-way mirror. For instance, I could maintain an Ubuntu/CentOS Ceph cluster at a remote location and replay the journaled images there (a note on the required image features follows below). However, rbd-mirror is not in the Proxmox Ceph repository. I think a discussion around this is merited.
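One prerequisite worth noting for journal replay: the images themselves must have the journaling feature enabled, which in turn requires exclusive-lock. A minimal sketch, with a made-up image name:

Code:
# Journaling depends on exclusive-lock, so enable that first
rbd feature enable rbd/vm-100-disk-1 exclusive-lock
rbd feature enable rbd/vm-100-disk-1 journaling

# Optionally, give new images these features by default via ceph.conf:
# rbd default features = 125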
 
However, rbd-mirror is not in the Proxmox Ceph repository.
The package is there.
Code:
rbd-mirror/stable 12.2.5-pve1 amd64
  Ceph daemon for mirroring RBD images
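Getting it running on a Proxmox node should then boil down to something like this (a sketch; the systemd instance name 'admin' assumes the daemon authenticates as client.admin, so substitute the client ID you created for mirroring):

Code:
apt install rbd-mirror
systemctl enable --now ceph-rbd-mirror@admin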
 
For us the one-way mirror configuration is sufficient, and we have successfully tested it between external, Ubuntu-hosted Ceph clusters. However, we have become accustomed to managing Ceph via Proxmox, and we like it. We attempted mirroring a Proxmox-hosted Ceph cluster to a remote Ubuntu-hosted one but couldn't get it to work. What I'm asking here is whether Proxmox is working toward a configuration to support one-way mirroring and, if so, how soon?

We are happily paying customers, but our subscriptions would be reduced by 2/3 if I have to convert to Ubuntu hosts for Ceph.
 
Our Ceph packages are upstream packages with a handful of patches on top, e.g. those that didn't make it through upstream's QA in time and couldn't be included in the release. Besides that, configuration and use are the same as with upstream's packages.

With the above in mind, rbd-mirror should work as described in Ceph's documentation:
http://docs.ceph.com/docs/luminous/rbd/rbd-mirroring/

Aside from that, rbd-mirror integration/documentation in some form or another is on my list.
 
Our Ceph packages are upstream packages with a handful of patches on top, e.g. those that didn't make it through upstream's QA in time and couldn't be included in the release. Besides that, configuration and use are the same as with upstream's packages.

With the above in mind, rbd-mirror should work as described in Ceph's documentation:
http://docs.ceph.com/docs/luminous/rbd/rbd-mirroring/

Aside from that, rbd-mirror integration/documentation in some form or another is on my list.

I am quite interested in this option as well. Is the only real limitation right now the fact that Proxmox can't handle different Ceph cluster names? It seems like everything else should work 100%.
 
Our Ceph packages are upstream packages with a handful of patches on top, e.g. those that didn't make it through upstream's QA in time and couldn't be included in the release. Besides that, configuration and use are the same as with upstream's packages.

With the above in mind, rbd-mirror should work as described in Ceph's documentation:
http://docs.ceph.com/docs/luminous/rbd/rbd-mirroring/

Aside from that, rbd-mirror integration/documentation in some form or another is on my list.

Alwin: Is the "rbd-mirror integration/documentation" getting any closer to the top of your list?
 
Alwin: Is the "rbd-mirror integration/documentation" getting any closer to the top of your list?

Also very interested in this - we're looking at migrating a large Xen estate that resides in two datacentres to Proxmox, and would love to be able to replace our DRBD storage with Ceph as part of this.
 