Understand backup problem

greg

Renowned Member
Apr 6, 2011
Greetings

I have a problem which has been crippling me for a while... A CT is being backed up daily to PBS; it's about 300G in total, with about 600M of changes daily. The backup process takes more than 14 hours, with data throughput measured by iftop at about 14 kbps. Other CTs and VMs on the same host/network seem to be working fine.

The real problem is that while the backup is active, which is more than half the time (14h per day), the CT is mostly unresponsive (and of course it's my most important server).

I tried migrating the CT to another host; same thing. The host is Proxmox 8.1.4, PBS is 2.3.1, and the CT is Debian 11.

How can I try to understand what is wrong here?

Thanks in advance

Regards
 
First, update your software. Then please show us your hardware.
And is your Proxmox Backup Server on a local 10 Gbit/s network, with SSDs on ZFS?
 
If you are not using change-detection metadata, then each backup run needs to at least read all of the data. Depending on the backup mode, it might also need to copy all of it (suspend mode) or shut down the container for the whole duration of the backup (stop mode).
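For illustration, a minimal sketch of enabling metadata-based change detection for a one-off container backup from the CLI. This assumes versions recent enough to support it (Proxmox VE 8.2+ and PBS 3.2+, if I remember correctly; check `man vzdump` on your version first), and the CT ID 100 and storage name "pbs" are placeholders:

Code:
# one-off backup of CT 100 to a PBS storage named "pbs",
# using metadata-based change detection
vzdump 100 --storage pbs --mode snapshot --pbs-change-detection-mode metadata

The same setting should also be available per backup job in the GUI (Advanced tab) or host-wide in /etc/vzdump.conf.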

I second what @news wrote above - update your host, then provide details:
- storage.cfg
- container config
- full backup task log
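For reference, a sketch of how those could be gathered on the PVE host (the CT ID 100 is a placeholder; adjust to yours):

Code:
# storage definitions
cat /etc/pve/storage.cfg

# container configuration
pct config 100

# the full task log can be copied from the GUI (double-click the backup
# task), or found on disk under the task log directory
ls /var/log/pve/tasks/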

thanks!
 
If you are not using change-detection metadata, then each backup run needs to at least read all of the data. Depending on the backup mode, it might also need to copy all of it (suspend mode) or shut down the container for the whole duration of the backup (stop mode).
I do not wish to offend anyone, but is "metadata" change-detection very reliable? If so, why is that not the default?
 
It's fairly new, but we no longer consider it experimental. Changing the default is hard, as it breaks re-use of existing backup data.
 
It's fairly new, but we no longer consider it experimental. Changing the default is hard, as it breaks re-use of existing backup data.
So I should NOT change this on an existing backup job (where there are already multiple backups completed)?

Should I create a new job for the same CTs/VMs and, once I have a verified "new" backup, purge the old job's copies?
 
So I should NOT change this on an existing backup job (where there are already multiple backups completed)?

You can nonetheless; you just need to be aware of the consequences. And the change only affects LXCs; VM backups use a different mechanism.
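If you do go the separate-job route, a sketch of what such a job could look like in /etc/pve/jobs.cfg (the job ID, schedule, storage name and CT ID are all placeholders, and the pbs-change-detection-mode property assumes a PVE version that supports it; jobs are normally created via Datacenter -> Backup in the GUI instead):

Code:
vzdump: backup-ct100-metadata
	schedule 02:00
	storage pbs
	vmid 100
	mode snapshot
	pbs-change-detection-mode metadata
	enabled 1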
 
You can nonetheless; you just need to be aware of the consequences. And the change only affects LXCs; VM backups use a different mechanism.
To "reset" the backup chain for a LXC, can I just use the "Forget" on the PBS GUI? Then the next backup for that job will know to start from scratch?
 
To "reset" the backup chain for a LXC, can I just use the "Forget" on the PBS GUI? Then the next backup for that job will know to start from scratch?
You can, but you can also keep the old backups. The worst case is that you will use more space until all the old backups are pruned.
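As an aside, snapshots can also be forgotten from the command line; a sketch, where the repository string and snapshot path are placeholders for your actual values:

Code:
# list the snapshots in the container's backup group
proxmox-backup-client snapshot list ct/100 --repository user@pbs@pbs.example:datastore

# forget (remove) one specific snapshot
proxmox-backup-client snapshot forget "ct/100/2024-01-01T02:00:00Z" --repository user@pbs@pbs.example:datastore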
 
Sorry, maybe I misunderstood what you meant by "breaks re-use of existing backup data".

What are the consequences if I change an existing job to "Metadata" (from the default)? Will it break the backups? Will it ignore the new setting? Or will it maybe take 1-2 backups before the metadata setting has enough info to work from, and then (hopefully) be faster thereafter, with everything remaining consistent?
 
It will require more space, as the snapshots created after you change the setting will contain different chunks than the ones before, even if the input data remains the same. No backups are broken; it's just that data re-use / deduplication doesn't work across the switch, so the backups take up more space if you mix both modes.
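To keep an eye on the space impact after switching, a sketch (the repository and datastore names are placeholders):

Code:
# overall datastore usage as seen by the client
proxmox-backup-client status --repository user@pbs@pbs.example:datastore

# on the PBS host itself, garbage collection reclaims chunks that are
# no longer referenced once the old snapshots have been pruned
proxmox-backup-manager garbage-collection start datastore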