Hello everybody,
I'm currently evaluating PBS, i.e. moving away from plain vzdump file backups and increasing the backup frequency. I've read quite a lot of the PBS documentation, but unfortunately a few details are still unclear to me.
Let's assume I have two PVE instances and two PBS instances at different locations, each standalone (but connected via VPN):
Site 1
pve01
- vm01-01 (100)
- vm01-02 (101)
- vm01-03 (102)
pbs01
- pbs01-datastore01
Site 2
pve02
- vm02-01 (100)
pbs02
- pbs02-datastore01
At site 1 I want to back up all VMs at 06:00 and 18:00 every day. For that I have to create two backup jobs on pve01, since one backup job can only have a single execution time (right?) but multiple execution days. Can and/or should I point both backup jobs at the same pbs01-datastore01? Or should I create a second datastore pbs01-datastore02 for the second backup job? Deduplication only works within a single datastore (so two datastores would mean roughly twice the amount of data on disk), right?
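Just to illustrate what I have in mind, here is roughly how I picture the two jobs as crontab-style entries in /etc/pve/vzdump.cron (I would actually create them via the GUI; this assumes a PVE storage entry named pbs01-datastore01 of type "pbs" on pve01, and the VM IDs and flags are only an example):

    # two daily jobs on pve01, both writing to the same PBS datastore
    0 6  * * *   root vzdump 100 101 102 --storage pbs01-datastore01 --mode snapshot --quiet 1
    0 18 * * *   root vzdump 100 101 102 --storage pbs01-datastore01 --mode snapshot --quiet 1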
To keep backups fast, it would be nice to use QEMU incremental backups (the "dirty bitmap" feature). Does this still work when the two jobs target different datastores, pbs01-datastore01 and pbs01-datastore02, on the same PBS instance?
At site 2 I back up the VM every day at 18:00 to pbs02-datastore01. Now I would like to pull all backups from site 1 to site 2 using the remotes functionality of PBS, so that site 1 has an offsite copy. Should I create another datastore pbs02-ds-offsitepbs01, or can/should I use pbs02-datastore01 for the pulled backups? Or would there be a collision on a single datastore because of the overlapping VM IDs (both sites use VMID 100)?
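Based on my reading of the docs on remotes and sync jobs, I picture the pull on pbs02 roughly like this (assuming for now that a dedicated target datastore is the right choice, which is exactly part of my question; host name, path, sync user, fingerprint and schedule are only placeholders):

    # on pbs02: register pbs01 as a remote and pull its datastore into a dedicated local one
    proxmox-backup-manager remote add pbs01 --host pbs01.site1.example \
        --auth-id sync@pbs --password 'SECRET' --fingerprint '<pbs01 fingerprint>'
    proxmox-backup-manager datastore create pbs02-ds-offsitepbs01 /mnt/datastore/pbs02-ds-offsitepbs01
    proxmox-backup-manager sync-job create pull-pbs01 --remote pbs01 \
        --remote-store pbs01-datastore01 --store pbs02-ds-offsitepbs01 --schedule 'daily'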
As I understand it, as long as I only pull the backups from pbs01 to pbs02 and do not create another backup job on pve01 pointing directly at a datastore on pbs02, the QEMU incremental backups to pbs01 should keep working, right?
Thanks!