Incremental backups & Ceph?

SanderM

Member
Oct 21, 2016
There is a patch for Proxmox that implements incremental backups. It's available here.

I read that it's not compatible with Ceph, so I wonder if there's a way to do incremental backups using Ceph + Proxmox one way or another?

I really need incremental backups ...
 
Okay. So for rbd snapshots+exports. Is there any script available for that already or will I have to write something myself?

And before I can use this I need to look into why making a snapshot of a VM running CloudLinux always freezes the VM completely.
I tried it before, and as soon as I create a snapshot of a VM running CloudLinux, it freezes and the snapshot creation process never seems to end... Other OSes are not affected.
 

Are you creating a VM snapshot of CloudLinux or a ceph snapshot of the disk of the CloudLinux VM?

I have written a script that performs a nightly export of Ceph images. On Saturday night it takes a full export of the disk image from Ceph; every other day of the week it takes a differential from that full export. I accomplish this using snapshots. It exports to an NFS share.

I also wrote a restore script to help my operators restore the proper files back to ceph if needed. If they try to restore a differential, the script informs the user that it will first restore the full image and then the differential.

If you are interested I can post the script. It is just a BASH script.
 
Here you go. Please go easy on me, I am not great at BASH scripting :) But all feedback is welcome.

I plan to modify the restore script so it will act like vzdump with vma files. Right now it just restores the image to the Ceph cluster. Soon the script will restore the vm.conf and all disks that were backed up with the Ceph backup script. I also want to figure out how to have the script register in the GUI under the task pane to show that it's backing up and give a status.

https://github.com/valeech/proxmox_ceph_backups
 
About the CloudLinux freezing: I just use the "Take snapshot" button from the Proxmox GUI. It works for every single VM except those running CloudLinux inside. The dialog that opens when it starts to create a snapshot never finishes, and the VM freezes completely. No idea why that is.
Maybe it helps if I install the QEMU guest agent inside the CloudLinux VM?

About your script: awesome! Please post back if you update your script with the additional features you already suggested. I'm very very interested in that!!! THANKS
 
The snapshot button in the GUI creates a VM snapshot. Ceph rbd snapshots are completely different. My script makes use of them heavily. Hopefully they will work and my script will work for you.

I'll keep you posted on the script.
 
Thanks! We'll have a look into that.

The main problem here is that you need a working Ceph cluster to restore ;)
That's the biggest benefit of PVE's VMA backups: they can be restored on any other PVE machine, even with only local storage (e.g. if the Ceph cluster is completely destroyed, which hopefully never happens :)
 


So your script is not interacting with the guest OS (through the qemu-agent, for example), and the backups may be less consistent than with Proxmox's VM backup method?
 
Yes, but Proxmox's backup system sends commands to the qemu-agent (running inside the guest) to freeze the filesystem momentarily, in order to make the snapshot more consistent before taking a backup. I don't know if this is worth it, but it's something your script won't do, right?

I'm not sure if there's any way we could improve backup consistency by telling guest VMs to flush their data, somehow, before your script creates an rbd snapshot...
 

Actually, you don't need a working Ceph cluster to restore. rbd export creates raw image files. Those images can then be mapped directly to VMs or converted with qemu to another format. I know this works with a full export. For differential exports, it looks like I will have to write my own tool to replay them onto the full image.

The mechanism I use in my script to get the differential exports to work is this:

create a Ceph snapshot of the disk on Saturday (rbd snap create)
export the full snapshot (rbd export pool/disk@snap)
create a Ceph differential export Sun-Fri using the snapshot from Saturday as the source (rbd export-diff --from-snap)

If you changed the backup strategy to always do full backups rather than differentials, you would be able to mount all your backups directly.
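A minimal sketch of that weekly-full / daily-differential cycle might look like the following. Note this is not the actual script from the repository; the pool/image name, destination path, and snapshot names are placeholders:

```shell
#!/bin/sh
# Sketch of the weekly-full / daily-differential cycle described above.
# POOL_IMAGE, DEST, and the snapshot names are hypothetical placeholders.
POOL_IMAGE="rbd/vm-100-disk-1"
DEST="/mnt/nfs-backups"

export_mode() {               # $1 = three-letter day name
  if [ "$1" = "Sat" ]; then echo "full"; else echo "diff"; fi
}

nightly_backup() {            # run once per night, e.g. from cron
  day=$(date +%a)
  snap="backup-$day"
  rbd snap create "$POOL_IMAGE@$snap"
  if [ "$(export_mode "$day")" = "full" ]; then
    rbd export "$POOL_IMAGE@$snap" "$DEST/$snap.img"
  else
    # differential against Saturday's snapshot
    rbd export-diff --from-snap backup-Sat "$POOL_IMAGE@$snap" "$DEST/$snap.diff"
  fi
}

# usage (on a node with access to the ceph cluster): nightly_backup
```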
 

You are right, my script does not send guest-agent commands currently. It should be pretty simple to quiesce the filesystems and then take the Ceph snapshot, just to make sure everything is consistent.
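Something like this sketch could wrap the snapshot in a freeze/thaw cycle. It assumes the guest agent is installed in the guest and enabled in the VM options, and the `qm guest cmd` subcommand names should be checked against your PVE version (older releases used `qm agent` instead):

```shell
#!/bin/sh
# Sketch: quiesce the guest through the QEMU guest agent around the RBD snapshot.
# The VMID and pool/image names below are placeholders.

snap_name() { echo "backup-$(date +%Y%m%d)"; }

quiesced_snapshot() {         # $1 = VMID, $2 = pool/image
  qm guest cmd "$1" fsfreeze-freeze   # flush and freeze guest filesystems
  rbd snap create "$2@$(snap_name)"   # take the point-in-time snapshot
  status=$?
  qm guest cmd "$1" fsfreeze-thaw     # always thaw, even if the snapshot failed
  return $status
}

# usage (on the PVE node hosting the VM): quiesced_snapshot 100 rbd/vm-100-disk-1
```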
 
Thanks for clarifying! Raw sounds good, but the backups are full-sized disk images then, right? What about compression?
 
Good question about compression. I installed the pigz compressor on one of the nodes in my cluster. It's a parallel compressor based on gzip, so it is really fast.

To add compression when exporting with the rbd command, instead of specifying an output file, specify a - (stdout) and pipe that to the pigz command, specifying the output filename there.

Example:

rbd export --rbd-concurrent-management-ops 20 pool/image - | pigz > /nfsshare/rbdexport.img.gz

pigz has a whole host of options you can give it to speed up the compression. I used the defaults and the compression added no significant time to the process. I will add this as a command-line option to the backup script.

I was able to export a 32G image down to a 5G file to a standalone proxmox host. I then unzipped it and mounted it directly to a new VM. The VM started right up, proving you can restore without a ceph cluster.
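For restoring those compressed exports, something along these lines should work (hypothetical helper names; pigz writes standard gzip streams, so plain `gzip -dc` can decompress them too):

```shell
#!/bin/sh
# Sketch of two restore paths for a gzip/pigz-compressed full export.

# Without a Ceph cluster: decompress back to a raw image file a VM can use.
restore_raw() {               # $1 = export.img.gz, $2 = output raw image
  gzip -dc "$1" > "$2"        # pigz -dc works the same, just faster
}

# With a working cluster: stream straight back into RBD ("-" reads from stdin).
restore_to_rbd() {            # $1 = export.img.gz, $2 = pool/image
  gzip -dc "$1" | rbd import - "$2"
}

# usage: restore_raw /nfsshare/rbdexport.img.gz /var/lib/vz/images/100/vm-100-disk-1.raw
```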
 
Sounds great, but pigz can burn all CPU cores, which can cause problems in a VM environment ;-)
You could test lzop, which PVE also uses for the VMA format. It's very fast with medium compression, which is enough in most cases.

Mounting it as raw is very nice indeed, and makes it really interesting ;-)
 
