Proxmox Backup Retention "Keep last X backups"

TrueMox

New Member
Nov 29, 2023
Hello,

I have two backup jobs for my container.
1. Daily Snapshot with "Keep last 12"
2. Monthly Snapshot (Stop Mode) with "Keep last 6"

My plan was to keep the daily snapshots for the last 12 days and the monthly snapshots for the last 6 months.

But after the monthly run a few hours ago, Proxmox deleted my backups until only 6 were left. Is this an error?

That makes no sense to me. I want to differentiate between backup job 1 and 2. And what is the point of providing me with the option individually for each backup job if it always affects all backup jobs anyway? :(

Maybe I don't understand something. I hope someone can help. Thank you very much!
 
Do you have another prune setting configured somewhere? They can be configured per backup job or globally per storage. If you configure it in both places, the more aggressive one wins.

If you use the Proxmox Backup Server, you can additionally define prune settings there as well ;-)
 
pruning happens per group - if both backup jobs back up to the same storage, they will back up to the same group, which means they will both prune that group with their respective settings. if you want to keep those two backup runs separate, then you need to back up to a different namespace (and configure that as a different storage on the PVE side) so that the two groups are independent.
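for example, with PBS the two jobs could target two namespaces, configured as two separate storages in /etc/pve/storage.cfg on the PVE side. a rough sketch (server address, datastore and namespace names are made up here, fingerprint/credentials omitted):

```
pbs: pbs-daily
	server 192.0.2.10
	datastore tank
	namespace daily
	username backup@pbs
	content backup

pbs: pbs-monthly
	server 192.0.2.10
	datastore tank
	namespace monthly
	username backup@pbs
	content backup
```

pointing the daily job at pbs-daily and the monthly job at pbs-monthly keeps the two groups (and their pruning) independent.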
 
pruning happens per group - if both backup jobs back up to the same storage, they will back up to the same group, which means they will both prune that group with their respective settings. if you want to keep those two backup runs separate, then you need to back up to a different namespace (and configure that as a different storage on the PVE side) so that the two groups are independent.

So this was to ensure guaranteed per-group/storage pruning? Shouldn't there at least be a WARN message popping up to make one aware that the per-job definitions are going to be overridden (easiest probably to throw it during the backup job)? It's extremely counterintuitive, as more granular settings usually override global ones. The OP is a prime example of what this can lead to, especially in an environment where different users set the two.
 
I added my new Backup configuration. Now I created two new folders (storage) for the daily backups and the monthly backups. Then I checked the path on the backup jobs. It should work now, right?
 

Attachments

  • 1722869202046.png (14.3 KB)
  • 1722869233942.png (17.1 KB)
Unfortunately, I don't understand that. I would like to leave it the way I have it now. It would just be nice if someone could confirm whether it works like this. Thanks!
 
Unfortunately, I don't understand that. I would like to leave it the way I have it now. It would just be nice if someone could confirm whether it works like this. Thanks!
if you define two different directories as directory storages, then their contents, including how they are pruned, are independent, yes.
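a minimal /etc/pve/storage.cfg sketch of such a setup (paths and storage names are invented):

```
dir: backup-daily
	path /mnt/backup/daily
	content backup
	prune-backups keep-last=12

dir: backup-monthly
	path /mnt/backup/monthly
	content backup
	prune-backups keep-last=6
```

with separate storages like this, the "keep last" values could even live on the storages themselves rather than on the jobs - either way, the two sets of backups no longer prune each other.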
 
So this was to ensure guaranteed per-group/storage pruning? Shouldn't there at least be a WARN message popping up to make one aware that the per-job definitions are going to be overridden (easiest probably to throw it during the backup job)? It's extremely counterintuitive, as more granular settings usually override global ones. The OP is a prime example of what this can lead to, especially in an environment where different users set the two.
OP had two *jobs*, each with prune settings, but backing up the same guest(s) to the same storage. in that case, both jobs are "equal" from a priority point of view, so the more aggressive pruning schedule will determine what gets pruned long-term.

if you define prune settings on the job and on the storage, then the one from the job (the more specific one) wins. only if you define just a fallback pruning setting on the storage, and no pruning setting on the job, will the storage setting win.

if you define pruning both on the client and on the server side with PBS, the more aggressive setting will win, since the pruning is completely independent on both ends.
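roughly, that precedence can be modeled like this (a simplified illustration, not actual PVE code - the function names are invented):

```python
def effective_keep_last(job_keep, storage_keep):
    """PVE side: the job's prune setting (more specific) wins;
    the storage-level setting is only a fallback."""
    return job_keep if job_keep is not None else storage_keep

def surviving_backups(pve_keep, pbs_keep):
    """PVE client and PBS server prune independently, so the most
    aggressive setting (smallest keep-last) determines what survives."""
    return min(pve_keep, pbs_keep)

# job setting overrides the storage fallback
print(effective_keep_last(12, 30))    # -> 12
# only the storage fallback is defined
print(effective_keep_last(None, 30))  # -> 30
# PVE keeps 12, PBS keeps 6 -> only 6 survive long-term
print(surviving_backups(12, 6))       # -> 6
```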

I hope this helps clear up the misunderstanding ;)
 
I hope this helps clear up the misunderstanding ;)

I'd literally have to try to reproduce what the OP asked about to make another "helpful" comment; I simply reacted to comment #2, which sounded not quite like your last one. Thanks for the reply, though!
 
Good that we talked about this - it seems I had gotten some of the details of how it works wrong.
 
Good that we talked about this - it seems I had gotten some of the details of how it works wrong.

I just want to say I did not mean to emphasise that - I was just explaining where I was coming from. I still think the design is unfortunate if this confusion can arise. If I understood it correctly now, then I would say maybe it should not even be possible to have 2 jobs with the same guest & storage defined.
 
then I would say maybe it should not even be possible to have 2 jobs with the same guest & storage defined.
it can make sense though - e.g., you can have different hook scripts doing different things, different backup modes (e.g., a common one would be stop mode backup on the weekend, snapshot on business days ;)), protected backups, ..
 
it can make sense though - e.g., you can have different hook scripts doing different things, different backup modes (e.g., a common one would be stop mode backup on the weekend, snapshot on business days ;)), protected backups, ..

Right but they would make sense in different groups only, wouldn't they? ;)
 
no, why? it would be better to distinguish such use cases by targeting different storages (even if those storages are effectively backed by the same disk), but as long as you don't have incompatible pruning settings, using a single storage can work as well..
 
no, why? it would be better to distinguish such use cases by targeting different storages (even if those storages are effectively backed by the same disk), but as long as you don't have incompatible pruning settings, using a single storage can work as well..

I think my point was (even before I knew it ;)) that what happened to the OP here could inadvertently happen to anyone; it's not an intuitive design.

Of course it is possible to argue that someone wants to do some gymnastics on a weekend - e.g. once there's a stop-mode weekend backup, only the last one needs to be kept at that point, so let's trash the hourly ones from the whole week, etc.

But what is the point of pruning? Space utilisation. It's not really managing space when it runs in a recurrent pattern (i.e. it would run out of space during the week anyhow in the above scenario).

What is the point of backups? To have something to depend on. If someone defines 2 jobs with different retention, they of course do not expect the jobs to interfere with each other. And if a more intricate schedule is intended, it can be achieved safely by distinguishing the groups.

I am pretty sure it is more complex to second-guess the user by identifying "incompatible prune settings"; that is why I suggested enforcing one job per guest & storage.

On a separate note, with PBS, I get that the pruning there is independent, but the moment the job is running it clearly can tell that its own retention policy will be ignored and should be throwing a warning about that (consider the PBS admin might not be the same as the PVE one).
 
I agree that there is a foot gun there, but it's one that would only affect a rather niche setup:

- conflicting (different prune settings)
- overlapping (same target, at least partial overlap w.r.t. covered guests)

backup jobs.

any combination of jobs that does not have both those properties is fine, including overlapping ones in general (so we can't just forbid them, that would break a ton of existing setups that work just fine. also, overlapping or not can change after job setup via pools, so it's not even trivial to implement such a check..).

it might be possible to implement a check similar to the "guest not covered by any job" one we already have, that detects and flags this particular, potentially problematic combination (without making it forbidden ;))
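such a check could in principle look something like this (pure illustration - the job fields and names are invented, not the real PVE data model):

```python
from itertools import combinations

def find_conflicts(jobs):
    """Flag pairs of jobs that target the same storage, overlap in the
    guests they cover, and carry different prune settings."""
    warnings = []
    for a, b in combinations(jobs, 2):
        if (a["storage"] == b["storage"]
                and a["guests"] & b["guests"]          # at least one shared guest
                and a["prune"] != b["prune"]):         # conflicting retention
            warnings.append((a["id"], b["id"]))
    return warnings

jobs = [
    {"id": "daily",   "storage": "local", "guests": {100, 101}, "prune": {"keep-last": 12}},
    {"id": "monthly", "storage": "local", "guests": {100},      "prune": {"keep-last": 6}},
]
print(find_conflicts(jobs))  # -> [('daily', 'monthly')]
```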
 
any combination of jobs that does not have both those properties is fine, including overlapping ones in general (so we can't just forbid them

ok, but a warning in the log won't kill anyone

, that would break a ton of existing setups that work just fine. also, overlapping or not can change after job setup via pools, so it's not even trivial to implement such a check..).

check at job run?

it might be possible to implement a check similar to the "guest not covered by any job" one we already have, that detects and flags this particular, potentially problematic combination (without making it forbidden ;))

yes but it's low priority i can tell :D i just think that not everyone has the same understanding of what a job & storage constitute - it's pretty logical to assume that when i set up two jobs, they only prune what they produced
 
it's pretty logical to assume that when i set up two jobs, they only prune what they produced
it might seem logical, but there is no connection between the job and the backup after the job is finished ;) a job doesn't own backups, nor is it constant/immutable. all the pruning and limiting logic works on the storage/backup "group" level, and covers all backups of that guest, whether they were created by job A, B or C, or manually.
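what happened to the OP can be reproduced with a tiny keep-last simulation over a single backup group (a toy model, not PVE code):

```python
def prune_keep_last(group, keep_last):
    """Prune a backup group in place, keeping only the newest keep_last
    snapshots - regardless of which job created them."""
    group.sort()           # timestamps, oldest first
    del group[:-keep_last]

# one group already holding 12 daily backups (days 1..12)
group = list(range(1, 13))

# the monthly job writes into the same group and prunes with keep-last=6
group.append(13)
prune_keep_last(group, 6)

print(group)  # -> [8, 9, 10, 11, 12, 13]: only 6 backups survive
```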
 
