GC very slow after 3.4 update

I'm having the same problem with GC on PBS 3.4: it takes 3 times longer than PBS 3.3 with the same amount of data.
I have PBS virtualized with Proxmox VE, so instead of downgrading I installed another VM with PBS 3.3 using the same backup disk, and voilà! the speed came back.
I will stay on 3.3 until further tests with newer versions.
 
what does your storage layout look like?
 
I am concerned, and I've frozen updates on PBS.
We have a lot in the air with PBS right now. I don't know how exposed I am to this issue.
staying on 3.3, which has a known bug, because you might be affected by an issue in 3.4 that so far only affects a tiny number of setups, seems like a bad strategy.
 
In addition to the storage layout and parameters as already requested by @fabian, please also share your GC task logs, ideally for both the PBS v3.3 and v3.4 runs. Further, if the size of your datastore allows, please also try to capture the strace logs, as requested here.
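For reference, capturing such a trace could look roughly like this (a sketch only; it assumes the GC worker runs inside the proxmox-backup-proxy process, and the output path is arbitrary):

Code:
# attach while the garbage collection task is running;
# -f follows worker threads, -tt adds timestamps, -T shows per-syscall durations
strace -f -tt -T -o /tmp/gc-strace.log -p "$(pidof proxmox-backup-proxy)"
# stop with Ctrl+C once the GC task has finished and attach /tmp/gc-strace.log here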
 
staying on 3.3, which has a known bug, because you might be affected by an issue in 3.4 that so far only affects a tiny number of setups, seems like a bad strategy.
I respect your opinion, but you work for a vendor that moves fast and breaks stuff.
I'm one of the people whose stuff gets broken.
See how that works?

I'm following SOP for when a bad update drops. Everyone freezes in their tracks.
The fix to the fix to the fix is the one that really gets ya. Look at the whole fleecing saga. Still ain't really fixed ...
 
I checked for a new update for PBS (3.4.1-1):

Code:
* garbage collection: account for created/deleted index files concurrently to GC to avoid potentially confusing log messages.

* garbage collection: fix rare race in chunk marking phase for setups doing high frequent backups in quick succession while immediately pruning to a single backup snapshot being left over after each such backup.

Does this resolve the issue reported in this thread?
 
Hi,
unfortunately, we were never able to reproduce the issue you reported, and no culprit could be identified from the information provided so far. Version 3.4.2-1 includes some minor improvements to the content iterators which were identified while investigating this issue, but these are not expected to account for the major runtime differences you and a few other users reported. The commits you referenced above are already part of 3.4.1-1, which is the version of your initial report.

Common to all the reports was that the datastores were located on ZFS with spinning disks; a potential cause could be ARC dnode cache bloat, see the suggestion here: https://forum.proxmox.com/threads/p...tzner-sx65-garbage-collect.167488/post-778805
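As a quick check of that hypothesis (a sketch assuming a ZFS-on-Linux setup; the fields come from the generic arcstats kstat, nothing PBS-specific):

Code:
# current dnode cache usage vs. the configured limit
grep -E 'dnode_size|arc_dnode_limit' /proc/spl/kstat/zfs/arcstats
# the limit is derived from this module parameter (percent of the ARC maximum, default 10)
cat /sys/module/zfs/parameters/zfs_arc_dnode_limit_percent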

Can you share the output of arc_summary?

If possible, upgrade to the latest available version and check whether the issue persists, also capturing the arc_summary and arcstat outputs.
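For example (the file names are just placeholders, pick whatever interval is convenient):

Code:
# snapshot of the current ARC state
arc_summary > /tmp/arc_summary.txt
# sample ARC size and hit/miss rates every 10 seconds while GC is running
arcstat 10 | tee /tmp/arcstat-during-gc.txt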
 