Replication between Ceph RBD and ZFS

thelogh

New Member
Jan 22, 2024
Hello everyone,
I have a Proxmox 8.1 cluster with Ceph Reef consisting of three servers: pve-1, pve-2 and pve-3.
We use Proxmox Backup Server 3.1 as our backup system. Regular backups run every evening to the Proxmox Backup Server.
I need to have a copy of some VMs on a fourth Proxmox server, pve-4, which has a ZFS pool. The initial idea was to use the replication integrated into Proxmox.
But from what I understand, the integrated replication functionality only works if the source machine is on a ZFS pool and the destination server is also on a ZFS pool. My machines on Ceph are on an RBD pool, so there is no way to use the integrated replication service towards a server with ZFS.
For now I have created a script that restores VMs from the PBS onto the pve-4 server with ZFS.
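Something along these lines, as a minimal sketch (the PBS storage name "pbs", the ZFS storage name "local-zfs" and the VM ID are placeholders for my setup):

```bash
#!/bin/bash
# Minimal sketch: restore the newest PBS backup of one VM onto the ZFS storage.
# "pbs" (PBS storage), "local-zfs" (ZFS storage) and the VM ID are placeholders.

VMID=100

# Newest backup volume for this VM; PBS volids contain an ISO timestamp,
# so a plain sort puts the most recent one last.
LATEST=$(pvesm list pbs --vmid "$VMID" | awk 'NR>1 {print $1}' | sort | tail -n1)

# Restore onto the ZFS pool, overwriting the existing standby VM.
qmrestore "$LATEST" "$VMID" --storage local-zfs --force
```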
Is there instead a way to directly sync a machine from Ceph to ZFS (pve-4) at regular intervals?
Another question: after restoring a complete backup from the PBS to the pve-4 server, is there a way to update the machine on pve-4 with only the latest differences (maybe I can create another script that sends just the changes since the last backup)?
 
There is currently no way to do incremental replication between different types of storages. An incremental/differential restore is also not something that is currently possible.
I am not sure how feasible such features would be to implement.
 
Thanks for the reply, but currently PVE sends the data to the PBS server via "vzdump" and uses the "dirty-bitmap", which speeds up the data transfer. I would like to be able to use the same method in reverse to save time: call it differential restore, dirty-bitmap, etc.
 
In the one direction, from a running VM to a PBS, it works. The other way around is not so easy, especially if the source and target VMs are different VMs, IIUC.

But as far as I understand, you have one Proxmox VE cluster/node with the source VM, which gets backed up to PBS. Is the target VM, which you currently restore regularly, in the same cluster or a different one? Maybe even at a different location?

If it is in the same Proxmox VE cluster, then I am a bit confused as to what you are trying to achieve, as there would be HA. If the target VM is on another Proxmox VE cluster, then the replication feature wouldn't work anyway, even if both source and target were using ZFS. But if you need to get that VM up and running quickly in case the source cluster dies, what about a "live restore" from PBS? Wouldn't that also solve your problem?
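For reference, a live restore can be started from the GUI, or roughly like this on the CLI if your version supports it (a sketch only; the volume ID, VM ID and storage name are placeholders, and the backup has to reside on a PBS datastore):

```bash
# Sketch of a live restore: the VM is started right away while its data is
# fetched from the PBS backup in the background. Volid, VMID and storage
# name are placeholders.
qmrestore pbs:backup/vm/100/2024-01-22T20:00:00Z 100 \
    --storage local-zfs --live-restore 1
```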
 
Once the VM is in place, I am only interested in synchronizing the disk.
The Ceph cluster is located in a different building from the PBS, to avoid problems in the event of fire, theft or flooding; the company has a very large site. Leaving everything in the same room doesn't seem appropriate to me (the OVH incident has already been forgotten by some :-D).
So in the building where the PBS is located there is a server that restores the latest backup version of some machines, so that they are always ready (and switched off) in case of problems. Some of them are huge and the restore can take days. I can start a live restore, but as I said that might work for 2 machines at the same time, not for all of them. I can't deploy another Ceph cluster. So the only way I have is to restore the machines as fast as possible; thanks to vzdump, machines of several TB are backed up in a few minutes or hours.

The pve-4 host is in a different cluster but connected to the PBS. I could also add it to the main cluster, modifying corosync and setting its quorum votes to 0 so as not to alter the HA, but I don't see the point since I still can't send any data between Ceph and ZFS. So, even mounting a disk with "proxmox-backup-client map" to get a block device, do you have any ideas for doing a block-level rsync? :-D
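Roughly what I have in mind, as a sketch (repository, snapshot, archive and zvol names are just placeholders, and this still copies every block instead of only the changed ones):

```bash
# Sketch: map a disk archive from PBS as a block device, then copy it onto the
# standby VM's ZFS zvol. Repository, snapshot, archive and zvol are placeholders;
# authentication (PBS_PASSWORD or an API token) is assumed to be set up.
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'

# Map the backed-up disk image; the command prints the /dev/loopN device it used.
proxmox-backup-client map "vm/100/2024-01-22T20:00:00Z" "drive-scsi0.img"

# Full copy of the mapped image onto the zvol (conv=sparse skips zero blocks).
dd if=/dev/loop0 of=/dev/zvol/rpool/data/vm-100-disk-0 bs=4M conv=sparse status=progress

# Release the mapping when done.
proxmox-backup-client unmap /dev/loop0
```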
 
The concept as a whole is flawed. The conventional approach requires 2 Ceph clusters with rbd mirroring and rolling snapshots on your prod cluster.

If you can't have the same or similar infrastructure at your two sites, then just accept the fact that it's going to be heterogeneous and asymmetrical all the time and work within that. You will have to do this on your own; this scenario is not handled out of the box by PVE or any included tools.

Use the PBS only for backup of the production Ceph-based RBDs.

Use the standby ZFS-based host only for the standby VMs, which you can either build from scratch or create from rbd exports. These ZFS VMs will have no ongoing block-level relationship to your Ceph-based VMs. Come up with an application-layer or filesystem-layer replication scheme.
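A rough sketch of the rbd-export route (pool, image, host and zvol names are placeholders):

```bash
# Sketch: full export of an RBD image, streamed over SSH onto a ZFS zvol on the
# standby host. Pool, image, host and zvol names are placeholders.

# Snapshot first so the export is crash-consistent.
rbd snap create vmpool/vm-100-disk-0@standby-sync

# Stream the snapshot to the standby host and write it onto the zvol (full copy).
rbd export vmpool/vm-100-disk-0@standby-sync - \
    | ssh root@pve-4 'dd of=/dev/zvol/rpool/data/vm-100-disk-0 bs=4M'

# Remove the snapshot afterwards.
rbd snap rm vmpool/vm-100-disk-0@standby-sync
```

Note that rbd export-diff can produce incremental diffs, but with the stock tooling those can only be applied to another RBD image via rbd import-diff, not to a zvol, so the ZFS side stays full-copy only.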
 
I know that the ideal would be two Ceph clusters in two distant facilities, connected to each other at 100 Gbit. But my problem is precisely that I can't have that, so I have to find the best possible solution. The backups between the Ceph cluster and the PBS run at an optimal speed and I am satisfied with them. I have read everything about Proxmox Backup, and I understand that it uses a different mechanism for storing and restoring images than a ZFS filesystem does. I'm here to ask whether I missed something, or whether there is anything that can be done by hand to send a disk image faster. If the image had been stored as a ZFS snapshot I would have used "zfs send", but I know that can't be used here. That's why I'm asking; maybe it is only useful to me, or maybe it can be useful to someone else as well.

Since there was already an up-to-date copy on PBS, the idea was to use that. But if there are no other solutions, I'll come up with something else.
 