Can I specify PBS namespace in backup jobs?

rahman

Renowned Member
Nov 1, 2010
Hi,

I want to use different retention settings for different types of backup jobs (hourly, daily, weekly, etc.). We have 2 PBS servers: the backup jobs use the first one directly, and the second one just syncs from the first. Each PBS has one datastore and one namespace.

I can define retention settings in the PVE backup jobs, but this only works on the first PBS; the second PBS applies its global prune retention setting, which is not OK for us (it keeps an unwanted number of snapshots for daily, weekly and monthly backups if we tune it for hourly backups). So it seems using one datastore with separate namespaces on PBS should work, but then I would need to create multiple storage configs on PVE, one per PBS namespace (hourly, daily, weekly, monthly, etc.). Can I set the backup namespace in backup jobs?

Regards,

Rahman
 
You can just split your sync job, so that the second PBS (the one not directly used by PVE) has the backups grouped by retention settings. Or you simply don't use a built-in prune job on that second PBS, but your own. You can even automate that by letting PVE set some marker in the backup notes, and parsing that on the sync target to determine how aggressive your pruning should be.
 
You can just split your sync job, so that the second PBS (the one not directly used by PVE) has the backups grouped by retention settings.

Can you elaborate more on how to do this? By using VM IDs in separate sync jobs? If I need to use VM IDs, it will be a burden to maintain them as VMs come and go in the PVE backup jobs.

you can even automate that by letting PVE set some marker in the backup notes, and parsing that on the sync target to determine how aggressive your pruning should be.

Do I need to write custom scripts to automate this?

Thanks for your help.
 
Can you elaborate more on how to do this? By using VM IDs in separate sync jobs? If I need to use VM IDs, it will be a burden to maintain them as VMs come and go in the PVE backup jobs.

Yes, this would require a group filter.
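
To illustrate, one sync job per retention class on the second PBS could look roughly like this; the job name, remote name, datastore `main` and the VMIDs are just examples, so check `proxmox-backup-manager sync-job create --help` on your version for the exact options:

```
# one sync job per retention class, selecting backup groups explicitly
proxmox-backup-manager sync-job create sync-hourly \
    --remote first-pbs --remote-store main --store main \
    --group-filter group:vm/100 --group-filter group:vm/101 \
    --schedule hourly
```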

Do I need to write custom scripts to automate this?

yes.
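
As a sketch of what the core of such a script could look like: the marker format `retention=<class>` and the keep values below are assumptions you would define yourself, not something PBS prescribes.

```python
# Sketch: decide prune options from a marker that PVE wrote into the backup notes.
# The "retention=<class>" convention and the keep values are made up for this
# example -- pick whatever convention fits your jobs.

RETENTION = {
    "hourly":  {"keep-last": 24},
    "daily":   {"keep-daily": 7},
    "weekly":  {"keep-weekly": 4},
    "monthly": {"keep-monthly": 6},
}

def prune_options(notes: str) -> dict:
    """Parse 'retention=<class>' out of the notes and return prune keep-options."""
    for line in notes.splitlines():
        if line.startswith("retention="):
            cls = line.split("=", 1)[1].strip()
            return RETENTION.get(cls, {})
    return {}  # no marker: fall back to the job's defaults

def prune_args(notes: str) -> list[str]:
    """Render the options as CLI-style arguments for a prune invocation."""
    return [f"--{k} {v}" for k, v in prune_options(notes).items()]
```

On the sync target you would then fetch each group's notes (e.g. via the PBS API) and pass the rendered options to a prune call.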

A third approach would be to use "remove_vanished" and only prune on the source side. But that is of course dangerous: if you accidentally prune too much, that would be propagated with the next sync.
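
For illustration, toggling that on an existing sync job (the job name `sync-hourly` is an example):

```
proxmox-backup-manager sync-job update sync-hourly --remove-vanished true
```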
 
Hmmm... yeah. Being able to specify namespaces for some VMs that need more aggressive pruning (sflow/netflow-type data) would have been a perfect solution, since PBS has neither tags nor any other finer-grained pruning per se. As it is now, we need extra and extra storage entries (even if they sit below the current namespace), and the number of PVE storage checks just explodes ;(


So I will need to investigate either many more storage entries or a manually scripted prune.
 
Well, if you store that much Netflow data (usually multiple TB), then it makes sense that the DB is spread over multiple VMs, and then maybe you can create a different storage entry on the same PBS with a different namespace. At least that is how I do it with NFA.
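
For reference, one storage entry per namespace in `/etc/pve/storage.cfg` would look roughly like this; the server address, datastore, namespace and user names are placeholders:

```
# /etc/pve/storage.cfg -- one PBS storage entry per retention namespace
pbs: pbs-hourly
    server 192.0.2.10
    datastore main
    namespace hourly
    username backup@pbs

pbs: pbs-daily
    server 192.0.2.10
    datastore main
    namespace daily
    username backup@pbs
```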