[SOLVED] Understanding datastores and (QEMU) incremental backups

bockhold

Active Member
Jan 13, 2018
Hello everybody,

I'm currently evaluating PBS, that is, moving away from vzdump file backups and increasing the backup frequency. I have read quite a lot of the PBS documentation, but unfortunately there are some details I haven't understood yet.

Let's assume I've got two PVE instances and two PBS instances on different locations, both standalone (but connected via VPN):

Site 1

pve01
- vm01-01 (100)
- vm01-02 (101)
- vm01-03 (102)

pbs01
- pbs01-datastore01

Site 2

pve02
- vm02-01 (100)

pbs02
- pbs02-datastore01

At site 1 I want to back up all VMs at 06:00 and 18:00 every day. As far as I can tell, one backup job can only have one execution time (right?) but multiple execution days, so I have to create two backup jobs on pve01. Can and/or should I point both backup jobs to the same pbs01-datastore01, or should I create a second pbs01-datastore02 for the second job? Deduplication only works within a single datastore (that is, two datastores would mean roughly twice the amount of data on disk), right?
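As a sketch of the two-jobs-one-datastore setup: older PVE versions store scheduled backup jobs as cron lines in /etc/pve/vzdump.cron (newer versions use /etc/pve/jobs.cfg, with a different syntax). Assuming the cron format, and with "pbs01-datastore01" standing in for the PVE storage entry that points at the PBS datastore, the two jobs might look roughly like this (illustrative only, exact flags depend on your PVE version):

```shell
# /etc/pve/vzdump.cron (illustrative; format varies by PVE version)
# Back up all VMs at 06:00 and again at 18:00, both to the SAME
# PBS-backed storage so deduplication and dirty bitmaps can apply.
0 6  * * *  root vzdump --all 1 --mode snapshot --storage pbs01-datastore01 --quiet 1
0 18 * * *  root vzdump --all 1 --mode snapshot --storage pbs01-datastore01 --quiet 1
```

Since both lines target the same storage, the two jobs share one chunk store and deduplicate against each other.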

To keep backups fast it would be nice to use QEMU incremental backups (the "dirty bitmap" feature). Does this still work when backing up to two different datastores, pbs01-datastore01 and pbs01-datastore02, on the same PBS instance?

On site 2 I back up the VM every day at 18:00 to pbs02-datastore01. Now I would like to pull all backups from site 1 to site 2 using the remotes functionality of PBS, to have an offsite backup for site 1. Should I create another datastore pbs02-ds-offsitepbs01, or can/should I use pbs02-datastore01 for the pulled backups? Or would there be a collision on a single datastore because of the overlapping VM IDs?
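For reference, a hedged sketch of how such a pull could be configured on pbs02 with the PBS CLI (command names as in the PBS admin guide; the hostname, auth-id, password and datastore names are assumptions based on the example above):

```shell
# On pbs02: register pbs01 as a remote, then create a sync job that
# pulls pbs01-datastore01 into a dedicated local datastore.
proxmox-backup-manager remote create pbs01 \
    --host pbs01.example.com \
    --auth-id sync@pbs \
    --password 'secret'

proxmox-backup-manager sync-job create pull-pbs01 \
    --remote pbs01 \
    --remote-store pbs01-datastore01 \
    --store pbs02-ds-offsitepbs01 \
    --schedule 'daily'
```

Pulling into a dedicated datastore (pbs02-ds-offsitepbs01) keeps the synced site-1 backups cleanly separated from the local site-2 backups.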

As I understand it, as long as I only pull the backups from pbs01 to pbs02 and do not create another backup job on pve01 pointing directly to a datastore on pbs02, the QEMU incremental backups to pbs01 should keep working, right?

Thanks!
 
You are almost there then ;)
You should definitely back up both jobs (06:00 and 18:00) on site 1 to the same datastore.
The dirty bitmap only works if you back up the same VM to the same datastore.
On site 2, where you pull the remote sync, you should pull into a separate, newly created datastore, not the one you back up the site 2 VMs to. Not only because of the VM IDs, but also because of syncing and pruning. Your understanding is correct: the synced datastore is a complete copy of the one on site 1.
You are absolutely right with your last sentence regarding the sync and QEMU dirty bitmaps.
 
Thanks for answering, @oversite! I have just started setting everything up and it is starting to work. :)

One thing I just came across: I should configure pruning, garbage collection and verification on each and every datastore on pbs01 and pbs02 (and preferably not on the backup jobs on pve01 and pve02), right? On pbs02 I also need to configure that on pbs02-datastore02 (the one for the pulled offsite backups), as otherwise it keeps pulling data and accumulating it in this datastore without noticing that old data gets deleted on the source pbs01-datastore01, correct?
 
It's a matter of taste whether you prune on the backup server or limit the number of backups at PVE backup time; I prefer to prune on the backup server. For the sync job there is a checkbox, "Remove vanished", that indeed deletes snapshots on the target if they have been removed from the source datastore. However, whether or not you prune on the remote, you will most likely need to run garbage collection on the remote at intervals, weekly or monthly, to actually reclaim the unused space.
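To make the above concrete, a sketch of the prune/GC side via the PBS CLI. The exact options depend on the PBS version (newer releases split pruning into separate prune jobs); the keep values, schedules and the sync job name "pull-pbs01" are made up for illustration:

```shell
# On the PBS side: set prune keep options and schedules on the
# datastore, plus a garbage-collection schedule to reclaim space.
proxmox-backup-manager datastore update pbs01-datastore01 \
    --keep-daily 14 --keep-weekly 8 \
    --prune-schedule 'daily' \
    --gc-schedule 'weekly'

# On pbs02, assuming a sync job named pull-pbs01: make it drop
# snapshots that were pruned at the source ("Remove vanished").
proxmox-backup-manager sync-job update pull-pbs01 --remove-vanished true
```

Note that pruning only removes snapshot indexes; the underlying chunks are freed only by the subsequent garbage collection run.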
 
