Incremental backups & Ceph?

I agree, pigz can be dangerous because by default it will use every core it can find. You can limit the number of threads with pigz's -p option. The nice thing about the rbd stdout option is that you can choose whatever compression tool you prefer.
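For example, a pipeline along these lines (just a sketch; the pool, image, and snapshot names are placeholders) caps pigz at four threads so an export can't eat every core:

rbd export <pool>/<image>@<snapshot> - | pigz -p 4 --fast > <image>.img.gz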
 
Aren't RBD snapshots going to hurt performance if they are active 24/7?

For this backup script to work, it has to keep snapshots around all the time in order to generate incremental backups. But from what I've read online, RBD snapshots aren't meant to be active all the time and can hurt performance quite a lot. Is that true?
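Roughly, the incremental mechanism relies on diffing against the previous snapshot, something like this (a sketch with placeholder names, not the script's actual commands):

# initial snapshot + full/base export
rbd snap create <pool>/<image>@base
rbd export-diff <pool>/<image>@base - | gzip > base.diff.gz

# next run: new snapshot, export only the blocks changed since "base"
rbd snap create <pool>/<image>@daily-01
rbd export-diff --from-snap base <pool>/<image>@daily-01 - | gzip > daily-01.diff.gz

# diffs are replayed in order with "rbd import-diff -" onto an existing image;
# the previous snapshot has to stay on the cluster so the next incremental has a reference point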

Experience?
 
Interesting. I have not seen any performance degradation on my side. I wasn't aware of this, so I have been researching it all morning. It does seem like a fair number of people have seen issues with snapshots on their clusters. Do you happen to have a link you could share that mentions snapshots aren't meant to remain on the cluster?
 
This thread was helpful to me. After some further research I came up with the following, which works end to end (export an image and re-import it):


# The "-" tells rbd export to write to stdout, so you can pipe ("|") into whatever compression tool you like.
# Take the snapshot first, then export it:
rbd export vm-138-disk-0@vm-138-disk-0_2021-04-03T15:53:58.290-07:00 - | gzip --fast > devworker.img.gz

# Get the image/snapshot name (in this example "vm-138-disk-0@vm-138-disk-0_2021-04-03T15:53:58.290-07:00") from:
rbd ls -l <your pool name>

# Note: the snapshot was created via the Ceph web GUI, so I don't have the CLI command for snapshot creation,
# but it should be easy to find. If you don't have the web GUI, maybe someone can update the thread with the
# snapshot-creation CLI command? @Somebody
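For what it's worth, I believe the CLI equivalent is the rbd snap subcommand, something like this (a sketch, since I only created the snapshot through the GUI; the snapshot name is just an example):

rbd snap create <your pool name>/vm-138-disk-0@<snapshot name>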

To restore/import, run:

# Here the "-" tells rbd import to read from stdin ("in" this time instead of "out"):
gunzip -c devworker.img.gz | rbd import - vm-138-disk-0

# Obviously make sure the VM is down when you import ... but this works! Like, OMG, nice!

PS: gzip/gunzip caused no issues with CPU load and compressed a 100 GB drive down to an image under 9 GB with no problem (roughly 27 GB of actual data on the disk).

Takeaway: finding CLI examples that use "-" was the hardest part.

If you have a dirty block device (lots of deleted files, e.g. usage has shrunk from a peak), consider zero-filling the free space with a file full of zeros before exporting; this will make your compressed image much smaller! (Sorry, I don't have the command at the moment ... dinner call <may get back on request>.)
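In case it helps, a common way to do that zero fill (a generic sketch of the idea, not my exact command) is to write a zero-filled file inside the guest until the disk is full, delete it, then shut the VM down and export:

dd if=/dev/zero of=/zerofile bs=1M   # fills free space with zeros; dd stops with "No space left on device", which is expected
sync
rm /zerofile                         # remove the filler so the space is usable again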
 