Namespace in relation to backup/snapshot retention

Sep 26, 2023
I've been having some issues with datastore space and am thinking of incorporating namespaces to help with backup retention.
I have several 'stagnant' servers - kept around for 'keepsake' - that I want to have at both the production and DR locations.
Those servers, along with others that are easily recreated, are being backed up every couple of days, and I don't really need to keep all of those copies in either place - say, only the last 3 months of them.
With that in mind, it seems like my 'all in one namespace' setup could be changed to something like this:

default namespace - normal daily/weekly/etc. backups
'monthly' namespace - backup jobs for those 2 or 3 'keepsake' servers

On the DR side, I would create different prune/GC jobs for the two namespaces, depending on how long I need to keep each.
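
For what it's worth, here's a rough sketch of what those two prune jobs could look like on the PBS side. The datastore name, job IDs, schedules, and keep counts are all made up, and namespace-aware prune jobs need PBS 2.2+; check the PBS docs for your version's exact options:

```
# shorter retention for everything in the root namespace
proxmox-backup-manager prune-job create prune-default \
    --store backups --schedule daily --keep-daily 7 --keep-weekly 4

# keep roughly 3 months of the 'keepsake' servers in the monthly namespace
proxmox-backup-manager prune-job create prune-monthly \
    --store backups --ns monthly --schedule daily --keep-monthly 3
```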

Is this how others incorporate different retention levels for different 'groups' of servers? I don't think I can 'move' the servers from the 'global' namespace to the new one, so I would have some duplicates for a time - but after that, I should be able to manage those backups better.

Does that make sense - or are there suggestions on how best to manage different retention levels for different backups? I already have different backup jobs based on some of that info; I just haven't fully incorporated that into replication and retention at the DR site.
 
Just noting that it's possible to mark a backup as protected and then tell PVE not to back up that disk.
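
From the CLI, that would be something like the following on the PVE side. Node and storage names are placeholders, and the protected flag needs PVE 7.1 or newer; the exact volume ID syntax may need adjusting on your setup:

```
# find the volume ID of the backup to keep
pvesh get /nodes/pve1/storage/pbs-store/content --content backup

# mark it protected so prune jobs skip it and it can't be deleted by accident
pvesh set "/nodes/pve1/storage/pbs-store/content/<volid>" --protected 1
```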

I think what you're describing is essentially what we've done: one category goes into its own namespace and the other into another, with two prune jobs.

I don't know whether one can move backups, but I don't think I've tried. However, deduplication should work across namespaces, so the data chunks wouldn't be duplicated.
 
It would be helpful if you could review your jobs (probably sync pull/push) to confirm that the deduplication really does hold across namespaces. I'm trying to figure out the namespace piece so that I can get either the backup jobs or the sync pulls to land in the different namespace 'folders'. How or where did you set those steps up?
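
For reference, a pull-style sync job on the DR PBS can be pinned to a namespace on both ends. A sketch, assuming a configured remote named prod-pbs and datastores named backups/backups-dr (the --ns/--remote-ns options need PBS 2.2+):

```
# pull only the 'monthly' namespace from production into the DR datastore
proxmox-backup-manager sync-job create pull-monthly \
    --remote prod-pbs --remote-store backups --remote-ns monthly \
    --store backups-dr --ns monthly --schedule daily
```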
 
re: dedupe, I don't have a source handy, but the garbage collection job doesn't have a namespace parameter; it runs on the datastore.
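
Easy enough to see on the PBS host itself: garbage collection is started per datastore and takes no namespace (the datastore name here is a placeholder):

```
# GC scans the whole datastore's chunk store, across all namespaces
proxmox-backup-manager garbage-collection start backups
```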

re: setup, take a look at:
https://github.com/jjoelc/pvesetup/blob/main/PBS Initial Setup.md#namespaces

One thing I had to learn: namespaces are entered as "name/space", not "/name/space", as a leading slash will error out.

That same doc shows connecting PBS to PVE at https://github.com/jjoelc/pvesetup/blob/main/PBS Initial Setup.md#connect-pbs-to-pve but unfortunately the picture doesn't match the words. The point is, the namespace (and token, etc.) is set on the PBS "Storage" entry in PVE, and the backup job then uses one of those PBS storage entries.
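
For anyone following along, here's roughly what two such entries end up looking like in /etc/pve/storage.cfg on the PVE side. Server name, datastore, token, and fingerprint are placeholders; the first entry has no namespace line, so it writes to the root namespace (the token secret lives separately under /etc/pve/priv/):

```
pbs: pbs-default
        server pbs.example.com
        datastore backups
        content backup
        username backup@pbs!pve
        fingerprint <fingerprint of the PBS certificate>

pbs: pbs-monthly
        server pbs.example.com
        datastore backups
        namespace monthly
        content backup
        username backup@pbs!pve
        fingerprint <fingerprint of the PBS certificate>
```

Each backup job under Datacenter → Backup then just targets pbs-default or pbs-monthly, and the backups land in the matching namespace.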