GC very slow after 3.4 update

Hi,

As reported in the Proxmox Backup Server 3.4 release thread, I am facing extreme slowness in the GC task after the upgrade: Here

proxmox-backup-manager version --verbose
Code:
proxmox-backup                     3.4.0         running kernel: 6.8.12-9-pve
proxmox-backup-server              3.4.1-1       running version: 3.4.1
proxmox-kernel-helper              8.1.1
pve-kernel-6.2                     8.0.5
proxmox-kernel-6.8                 6.8.12-10
proxmox-kernel-6.8.12-9-pve-signed 6.8.12-9
proxmox-kernel-6.8.4-3-pve-signed  6.8.4-3
proxmox-kernel-6.2.16-20-pve       6.2.16-20
proxmox-kernel-6.2                 6.2.16-20
pve-kernel-6.2.16-3-pve            6.2.16-3
ifupdown2                          3.2.0-1+pmx11
libjs-extjs                        7.0.0-5
proxmox-backup-docs                3.4.1-1
proxmox-backup-client              3.4.1-1
proxmox-mail-forward               0.3.2
proxmox-mini-journalreader         1.4.0
proxmox-offline-mirror-helper      unknown
proxmox-widget-toolkit             4.3.10
pve-xtermjs                        5.5.0-2
smartmontools                      7.3-pve1
zfsutils-linux                     2.2.7-pve2

cat /etc/proxmox-backup/prune.cfg
Code:
datastore: pbs01
        gc-schedule sat 05:00
        notification-mode notification-system
        path /mnt/datastore/pbs01
        tuning gc-atime-safety-check=1

I don't have any prune jobs configured on the PBS side; instead, retention is configured in each backup job on the Proxmox VE side, with a different retention for each VM. There are currently 384 groups and 2138 snapshots, and approximately 1.8 TiB of backups are written every day.
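For reference, the per-VM retention on the Proxmox side lives in the backup jobs, roughly like this in /etc/pve/jobs.cfg (an illustrative sketch only; the job id, schedule and keep values here are made up and not my actual config):

Code:
vzdump: backup-daily-example
        schedule 01:00
        enabled 1
        mode snapshot
        storage pbs01
        prune-backups keep-daily=7
        vmid 20266,20288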
 
Hi,
thanks for sharing the outputs.
First off, from the output you showed in the other thread one can see that there are missing chunks in your datastore
2025-05-12T13:39:01-03:00: warning: unable to access non-existent chunk 780f82546e91812e1b1c04815c2afbf73ad963bc99650502b8c2b379093e3407, required by "/mnt/datastore/pbs002/ns/cloudkvm02/vm/20510/2025-05-12T00:55:08Z/drive-scsi0.img.fidx"
2025-05-12T13:39:01-03:00: warning: unable to access non-existent chunk 6a8c3eea53c7a6f96418f0b48f7c47ac342497836d3c3c9b6fe6a5d7826ed353, required by "/mnt/datastore/pbs002/ns/cloudkvm02/vm/20510/2025-05-12T00:55:08Z/drive-scsi0.img.fidx"
So it is strongly recommended to run a verification job, which however may take some time in your case.
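For reference, such a verification can also be started for the whole datastore from the command line, along the lines of the following (a sketch; the datastore name is a placeholder):

Code:
proxmox-backup-manager verify <datastore>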

Regarding the other log lines:
2025-05-13T01:29:04-03:00: Encountered new index file '"/mnt/datastore/pbs/ns/stor01/vm/20288/2025-05-12T06:00:02Z/drive-scsi0.img.fidx"', increment total index file count
2025-05-13T01:37:01-03:00: Encountered new index file '"/mnt/datastore/pbs/ns/stor01/vm/20266/2025-05-12T02:26:25Z/drive-scsi0.img.fidx"', increment total index file count
2025-05-13T01:37:46-03:00: Encountered new index file '"/mnt/datastore/pbs/ns/stor01/vm/20266/2025-05-11T02:11:07Z/drive-scsi0.img.fidx"', increment total index file count
These indicate that new backup snapshots were created while the GC was ongoing. Can you please share the retention settings for VMID 20288 and VMID 20266, as well as the VM configs (qm config <VMID>)?

Also, did you already try to increase the gc-cache-capacity in the datastore tuning options to the maximum value of 8388608, as suggested? Does this have a significant effect on the runtime? See https://pbs.proxmox.com/docs/storage.html#tuning
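For reference, the option can be set on the datastore roughly like this (a sketch; I assume here that the --tuning property string replaces the whole tuning line, so existing options such as gc-atime-safety-check=1 should be repeated):

Code:
proxmox-backup-manager datastore update <datastore> --tuning 'gc-atime-safety-check=1,gc-cache-capacity=8388608'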
 
Hi @Chris

This message:

Code:
2025-05-12T13:39:01-03:00: warning: unable to access non-existent chunk 780f82546e91812e1b1c04815c2afbf73ad963bc99650502b8c2b379093e3407, required by "/mnt/datastore/pbs002/ns/cloudkvm02/vm/20510/2025-05-12T00:55:08Z/drive-scsi0.img.fidx"
2025-05-12T13:39:01-03:00: warning: unable to access non-existent chunk 6a8c3eea53c7a6f96418f0b48f7c47ac342497836d3c3c9b6fe6a5d7826ed353, required by "/mnt/datastore/pbs002/ns/cloudkvm02/vm/20510/2025-05-12T00:55:08Z/drive-scsi0.img.fidx"

This has happened in the past, before the update, and it was not resolved by running the verification process; I was only able to resolve it by deleting the entire VM backup, running the GC, and creating the backup again. Does this happen when an old backup of a VM is deleted due to retention while the GC is running? I noticed that this error only occurred on VMs with short retention, e.g. 1 day.

As for the other log line, 2025-05-13T01:29:04-03:00: Encountered new index file..., I understood it to mean that a new snapshot was found. These VMs have 7-day retention.

qm config 20288:
Code:
agent: 1,freeze-fs-on-backup=0
boot: order=scsi0;ide2
cicustom: user=snippets:snippets/userconfig-1959.yaml
ciuser: root
cores: 4
cpu: Haswell-noTSX
ide2: none,media=cdrom
kvm: 1
memory: 8192
meta: creation-qemu=7.0.0,ctime=1674153325
name: example
net0: virtio=BC:24:11:31:EF:A2,bridge=vmbr0,firewall=1,rate=125,tag=158
numa: 0
ostype: l26
scsi0: stor01-vms:vm-20288-disk-0,cache=none,discard=on,format=raw,iops_rd=2000,iops_wr=1000,mbps_rd=300,mbps_wr=200,size=100G,ssd=1
scsi1: stor01-vms:vm-20288-cloudinit,media=cdrom
scsihw: virtio-scsi-pci
searchdomain: aja.to
serial0: socket
smbios1: uuid=a4936b2a-b4b8-42e9-b34e-2e6601876dbc
sockets: 1
vga: std
vmgenid: df26226a-ec54-45fe-8471-c03276efcf0a

qm config 20266
Code:
agent: 1
boot: order=scsi0;ide2
cicustom: user=snippets:snippets/userconfig-1749.yaml
ciuser: root
cores: 2
cpu: Haswell-noTSX
ide0: stor01-vms:vm-20266-cloudinit,media=cdrom
ide2: none,media=cdrom
ipconfig1: ip=dhcp
kvm: 1
memory: 4096
meta: creation-qemu=6.1.0,ctime=1649178087
name: example2
net0: virtio=BC:24:11:7A:DC:0E,bridge=vmbr0,firewall=1,rate=12.5,tag=158
net1: virtio=BC:24:11:C5:32:0F,bridge=vmbr0,mtu=1,rate=125,tag=2017
numa: 0
onboot: 1
ostype: l26
scsi0: stor01-vms:vm-20266-disk-0,cache=none,discard=on,format=raw,iops_rd=2000,iops_wr=1000,mbps_rd=300,mbps_wr=200,size=50G,ssd=1
scsihw: virtio-scsi-pci
searchdomain: aja.to
serial0: socket
smbios1: uuid=d23232fa-c67b-45c1-b543-450f422fd71d
sockets: 1
vga: std
vmgenid: 8bc4b649-92e0-4d96-a4b3-a2663c98544b

I also set gc-cache-capacity to 8388608 and restarted the GC; it has been running for 18 hours and is at 35%. I can't say whether it had a positive effect; to me it seems to be taking the same time as before. Can you explain the reason for the value 8388608? Is it bytes? What does it really mean? I didn't find anything about it in the documentation: https://pbs.proxmox.com/docs/storage.html#tuning
 
Does this happen when an old backup of a VM is deleted due to retention while the GC is running? I noticed that this error only occurred on VMs with short retention, e.g. 1 day
Yes, this is the edge case I was talking about here [0]. Under such conditions, i.e. long-running garbage collection tasks combined with short retention settings, the GC could previously ignore some chunks and not correctly mark them as in use. This issue has been fixed with [1].
This however means that your garbage collection runtime was previously shorter partly because some index files and chunks were ignored, which are now included in GC phase 1.

Can you please post the summary part of the GC run after the update, so one can compare the values given there? Maybe that gives some more insight. For the previous run you showed:
Code:
2025-05-05T11:42:49-03:00: processed 99% (24772420 chunks)
2025-05-05T11:59:49-03:00: Removed garbage: 9.458 TiB
2025-05-05T11:59:49-03:00: Removed chunks: 6149718
2025-05-05T11:59:49-03:00: Pending removals: 2.049 GiB (in 1970 chunks)
2025-05-05T11:59:49-03:00: Original data usage: 542.491 TiB
2025-05-05T11:59:49-03:00: On-Disk usage: 36.812 TiB (6.79%)
2025-05-05T11:59:49-03:00: On-Disk chunks: 18875319
2025-05-05T11:59:49-03:00: Deduplication factor: 14.74
2025-05-05T11:59:49-03:00: Average chunk size: 2.045 MiB

I noticed that this error only occurred on VMs with short retention, e.g. 1 day
How big is the combined size of all of the VMs with a retention setting of less than a day? How much new data is approximately written by the backup jobs for these VMs?

Can you explain the reason for the value 8388608? Is it bytes? What does it really mean?

The value is the number of available cache slots for storing chunk digests, used to avoid duplicate atime updates. The maximum value is obtained as 8*1024*1024, with a default of 1024*1024, which was chosen as a trade-off between runtime improvements and memory usage; see also [2].
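To put the number into perspective, here is a rough back-of-the-envelope estimate (assuming each slot holds a 32-byte chunk digest; the cache bookkeeping adds some overhead on top of this):

Code:
# maximum capacity is 8 * 1024 * 1024 cache slots
echo $((8 * 1024 * 1024))                        # 8388608
# the digests alone, at 32 bytes per slot, come to roughly 256 MiB
echo $(( 8 * 1024 * 1024 * 32 / 1024 / 1024 ))   # 256 (MiB)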

[0] https://forum.proxmox.com/threads/proxmox-backup-server-3-4-released.164869/post-770753
[1] https://git.proxmox.com/?p=proxmox-backup.git;a=commit;h=cb9814e33112f2e4847083a1b742e3126952064b
[2] https://lore.proxmox.com/pbs-devel/20250404130713.376630-1-c.ebner@proxmox.com/
 
This however means that your garbage collection runtime was previously shorter partly because some index files and chunks were ignored, which are now included in GC phase 1.
So this update did not bring the improvement in GC execution speed as announced, but rather a fix for the mentioned bug? This is good, but in my environment with HDD storage it made things slower.

I know that SSD storage would be much better, but it is financially unfeasible given the amount of TiB stored. What I do here is split the backups across multiple PBS instances with ZFS and a special device on NVMe SSDs. I won't even update the others for now, until I trust this version 3.4 more. This GC task is really very heavy; during backup execution some backups reach a timeout due to competition with the GC, and the backup fails.

How big is the combined size of all of the VMs with a retention setting of less than a day? How much new data is approximately written by the backup jobs for these VMs?
These VMs had around 1 TiB of data in total, with about 100 GiB transferred per backup run

Can you please post the summary part of the GC run after the update, so one can compare the values given there? Maybe that gives some more insight. For the previous run you showed:

When the task finishes I will post the result here; at the moment it has been running for 22 hours and is at 41% of phase 1.
 
So this update did not bring the improvement in GC execution speed as announced, but rather a fix for the mentioned bug? This is good, but in my environment with HDD storage it made things slower
Both are actually the case: there were the improvements to reduce the number of duplicate atime updates on already seen chunks (as kept track of by the cache) and the change in the iteration logic to better correlate index files, as well as the bugfix for the possibly untouched chunks. Why this has negative effects with your particular setup remains to be clarified; so far the possible cause is only a hypothesis.

I know that SSD storage would be much better, but it is financially unfeasible given the amount of TiB stored. What I do here is split the backups across multiple PBS instances with ZFS and a special device on NVMe SSDs. I won't even update the others for now, until I trust this version 3.4 more. This GC task is really very heavy; during backup execution some backups reach a timeout due to competition with the GC, and the backup fails.
Yes, since the garbage collection is I/O heavy with respect to metadata access, the recommended special device will help a lot with GC performance.
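As a side note for anyone reading along, a metadata special device is added to an existing pool roughly like this (a sketch only; the pool name and device paths are placeholders, and the special vdev should always be redundant, since losing it loses the pool):

Code:
# add a mirrored NVMe special device to an existing pool (placeholder names)
zpool add <poolname> special mirror /dev/nvme0n1 /dev/nvme1n1
# optionally also store small blocks (e.g. up to 4K) on the special device
zfs set special_small_blocks=4K <poolname>

Note that only metadata written after adding the device ends up on it; the metadata of existing chunks stays on the HDDs until it is rewritten.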

These VMs had around 1 TiB of data in total, with about 100 GiB transferred per backup run
Given these values, the additionally touched chunks, as compared to the GC task log from before the upgrade you showed, seem not to be solely responsible for the increased GC runtime. What was the previously running version? Were there any changes to the underlying storage? Did your backup volume change significantly before and after the upgrade, e.g. did you also perform a lot of upgrades on the VMs, were there changes in retention settings, backup frequency, etc.?

When the task finishes I will post the result here; at the moment it has been running for 22 hours and is at 41% of phase 1.
Please try to post the full task log or at least the summary of all the GC jobs after the upgrade, and also further task logs from before the upgrade (maybe the last 5) as well.
 
I had to go back to version 3.3 of Proxmox Backup Server, as the new version was really unsustainable in my environment. Note that I only reinstalled PBS, keeping the same ZFS storage, and the GC is back to the way it was. In version 3.4 the first phase ran for 3 days and didn't go beyond 81%; that's when I decided to go back to the previous version, because as it's a production environment I was running out of space. With version 3.3 the first phase ran in 8 hours! That's an absurd difference.
 
This is the summary of the GC execution in version 3.3. Unfortunately in 3.4 I was unable to finish it

Code:
2025-05-19T02:31:07-03:00: Removed garbage: 8.658 TiB
2025-05-19T02:31:07-03:00: Removed chunks: 7140958
2025-05-19T02:31:07-03:00: Pending removals: 1.116 GiB (in 1176 chunks)
2025-05-19T02:31:07-03:00: Original data usage: 721.212 TiB
2025-05-19T02:31:07-03:00: On-Disk usage: 34.436 TiB (4.77%)
2025-05-19T02:31:07-03:00: On-Disk chunks: 18045094
2025-05-19T02:31:07-03:00: Deduplication factor: 20.94
2025-05-19T02:31:07-03:00: Average chunk size: 2.001 MiB
 
Note that I only reinstalled PBS, keeping the same ZFS storage, and the GC is back to the way it was
Good, that is an important datapoint, as it excludes unrelated changes.
In version 3.4 the first phase ran for 3 days and didn't go beyond 81%; that's when I decided to go back to the previous version, because as it's a production environment I was running out of space. With version 3.3 the first phase ran in 8 hours! That's an absurd difference.
Please append the full task log for the aborted run, as well as the full task log for the new run after the downgrade to 3.3, for further investigation.

Another thing came to mind: Do you have symlinked parts of your datastore contents?
 
This is the summary of the GC execution in version 3.3. Unfortunately in 3.4 I was unable to finish it
Just to point this out once again: with PBS version 3.3 and your short retention, you are at risk of cleaning up in-use chunks. Finding the root cause of the increased GC runtime with version 3.4 is therefore important. Maybe you could perform a GC run with the new version, but without performing any backups while the GC is ongoing, in order to see if the newly created snapshots are causing the increased runtime?

Also, please do post the output of find /path/to/datastore/ -type l.
 
Good, that is an important datapoint, as it excludes unrelated changes.
Exactly, there was no change in the storage other than the upgrade to version 3.4, and now the rollback to 3.3.

Please append the full task log for the aborted run, as well as the full task log for the new run after the downgrade to 3.3, for further investigation.
Unfortunately, when reinstalling version 3.3 I lost the GC task logs, and I hadn't saved them. But basically it was at 81% in phase 1, and of course there were backups running during this period, and there were several log lines like "Encountered new index file...".

This is the log from version 3.3:
Code:
2025-05-17T05:00:00-03:00: starting garbage collection on store pbs001
2025-05-17T05:00:00-03:00: task triggered by schedule 'sat 05:00'
2025-05-17T05:00:00-03:00: Start GC phase1 (mark used chunks)
2025-05-17T05:02:34-03:00: marked 1% (28 of 2705 index files)
2025-05-17T05:03:16-03:00: marked 2% (55 of 2705 index files)
2025-05-17T05:03:47-03:00: marked 3% (82 of 2705 index files)
2025-05-17T05:05:05-03:00: marked 4% (109 of 2705 index files)
2025-05-17T05:06:20-03:00: marked 5% (136 of 2705 index files)
2025-05-17T05:09:33-03:00: marked 6% (163 of 2705 index files)
2025-05-17T05:10:49-03:00: marked 7% (190 of 2705 index files)
2025-05-17T05:12:02-03:00: marked 8% (217 of 2705 index files)
2025-05-17T05:13:52-03:00: marked 9% (244 of 2705 index files)
2025-05-17T05:14:59-03:00: marked 10% (271 of 2705 index files)
2025-05-17T05:16:05-03:00: marked 11% (298 of 2705 index files)
2025-05-17T05:16:48-03:00: marked 12% (325 of 2705 index files)
2025-05-17T05:18:08-03:00: marked 13% (352 of 2705 index files)
2025-05-17T05:19:51-03:00: marked 14% (379 of 2705 index files)
2025-05-17T05:20:16-03:00: marked 15% (406 of 2705 index files)
2025-05-17T05:20:49-03:00: marked 16% (433 of 2705 index files)
2025-05-17T05:21:37-03:00: marked 17% (460 of 2705 index files)
2025-05-17T05:22:06-03:00: marked 18% (487 of 2705 index files)
2025-05-17T05:22:55-03:00: marked 19% (514 of 2705 index files)
2025-05-17T05:23:45-03:00: marked 20% (541 of 2705 index files)
2025-05-17T05:23:56-03:00: marked 21% (569 of 2705 index files)
2025-05-17T05:24:09-03:00: marked 22% (596 of 2705 index files)
2025-05-17T05:24:55-03:00: marked 23% (623 of 2705 index files)
2025-05-17T05:26:24-03:00: marked 24% (650 of 2705 index files)
2025-05-17T05:27:39-03:00: marked 25% (677 of 2705 index files)
2025-05-17T05:28:37-03:00: marked 26% (704 of 2705 index files)
2025-05-17T05:29:09-03:00: marked 27% (731 of 2705 index files)
2025-05-17T05:29:49-03:00: marked 28% (758 of 2705 index files)
2025-05-17T05:30:38-03:00: marked 29% (785 of 2705 index files)
2025-05-17T05:31:08-03:00: marked 30% (812 of 2705 index files)
2025-05-17T05:31:37-03:00: marked 31% (839 of 2705 index files)
2025-05-17T05:31:57-03:00: marked 32% (866 of 2705 index files)
2025-05-17T05:32:18-03:00: marked 33% (893 of 2705 index files)
2025-05-17T05:33:03-03:00: marked 34% (920 of 2705 index files)
2025-05-17T05:33:27-03:00: marked 35% (947 of 2705 index files)
2025-05-17T05:33:48-03:00: marked 36% (974 of 2705 index files)
2025-05-17T05:34:11-03:00: marked 37% (1001 of 2705 index files)
2025-05-17T05:35:23-03:00: marked 38% (1028 of 2705 index files)
2025-05-17T05:36:49-03:00: marked 39% (1055 of 2705 index files)
2025-05-17T05:37:03-03:00: marked 40% (1082 of 2705 index files)
2025-05-17T05:37:46-03:00: marked 41% (1110 of 2705 index files)
2025-05-17T05:39:25-03:00: marked 42% (1137 of 2705 index files)
2025-05-17T05:39:45-03:00: marked 43% (1164 of 2705 index files)
2025-05-17T05:39:55-03:00: marked 44% (1191 of 2705 index files)
2025-05-17T05:40:37-03:00: marked 45% (1218 of 2705 index files)
2025-05-17T05:41:25-03:00: marked 46% (1245 of 2705 index files)
2025-05-17T05:41:42-03:00: marked 47% (1272 of 2705 index files)
2025-05-17T05:42:04-03:00: marked 48% (1299 of 2705 index files)
2025-05-17T05:42:22-03:00: marked 49% (1326 of 2705 index files)
2025-05-17T05:42:48-03:00: marked 50% (1353 of 2705 index files)
2025-05-17T05:43:36-03:00: marked 51% (1380 of 2705 index files)
2025-05-17T05:45:18-03:00: marked 52% (1407 of 2705 index files)
2025-05-17T05:45:39-03:00: marked 53% (1434 of 2705 index files)
2025-05-17T05:47:23-03:00: marked 54% (1461 of 2705 index files)
2025-05-17T05:47:57-03:00: marked 55% (1488 of 2705 index files)
2025-05-17T05:48:12-03:00: marked 56% (1515 of 2705 index files)
2025-05-17T05:49:08-03:00: marked 57% (1542 of 2705 index files)
2025-05-17T05:52:45-03:00: marked 58% (1569 of 2705 index files)
2025-05-17T05:53:15-03:00: marked 59% (1596 of 2705 index files)
2025-05-17T05:55:37-03:00: marked 60% (1623 of 2705 index files)
2025-05-17T05:56:48-03:00: marked 61% (1651 of 2705 index files)
2025-05-17T05:57:20-03:00: marked 62% (1678 of 2705 index files)
2025-05-17T05:59:40-03:00: marked 63% (1705 of 2705 index files)
2025-05-17T06:03:19-03:00: marked 64% (1732 of 2705 index files)
2025-05-17T06:19:02-03:00: marked 65% (1759 of 2705 index files)
2025-05-17T06:22:50-03:00: marked 66% (1786 of 2705 index files)
2025-05-17T06:26:32-03:00: marked 67% (1813 of 2705 index files)
2025-05-17T06:27:08-03:00: marked 68% (1840 of 2705 index files)
2025-05-17T06:28:12-03:00: marked 69% (1867 of 2705 index files)
2025-05-17T06:29:52-03:00: marked 70% (1894 of 2705 index files)
2025-05-17T06:30:42-03:00: marked 71% (1921 of 2705 index files)
2025-05-17T06:31:15-03:00: marked 72% (1948 of 2705 index files)
2025-05-17T06:35:01-03:00: marked 73% (1975 of 2705 index files)
2025-05-17T06:38:33-03:00: marked 74% (2002 of 2705 index files)
2025-05-17T06:40:57-03:00: marked 75% (2029 of 2705 index files)
2025-05-17T06:42:38-03:00: marked 76% (2056 of 2705 index files)
2025-05-17T06:43:43-03:00: marked 77% (2083 of 2705 index files)
2025-05-17T06:47:32-03:00: marked 78% (2110 of 2705 index files)
2025-05-17T06:49:29-03:00: marked 79% (2137 of 2705 index files)
2025-05-17T06:51:31-03:00: marked 80% (2164 of 2705 index files)
2025-05-17T06:53:13-03:00: marked 81% (2192 of 2705 index files)
2025-05-17T06:55:02-03:00: marked 82% (2219 of 2705 index files)
2025-05-17T06:56:55-03:00: marked 83% (2246 of 2705 index files)
2025-05-17T06:58:54-03:00: marked 84% (2273 of 2705 index files)
2025-05-17T07:00:56-03:00: marked 85% (2300 of 2705 index files)
2025-05-17T07:03:02-03:00: marked 86% (2327 of 2705 index files)
2025-05-17T07:04:36-03:00: marked 87% (2354 of 2705 index files)
2025-05-17T07:06:01-03:00: marked 88% (2381 of 2705 index files)
2025-05-17T07:07:28-03:00: marked 89% (2408 of 2705 index files)
2025-05-17T07:08:40-03:00: marked 90% (2435 of 2705 index files)
2025-05-17T07:20:55-03:00: marked 91% (2462 of 2705 index files)
2025-05-17T07:22:43-03:00: marked 92% (2489 of 2705 index files)
2025-05-17T07:26:59-03:00: marked 93% (2516 of 2705 index files)
2025-05-17T07:29:46-03:00: marked 94% (2543 of 2705 index files)
2025-05-17T07:38:08-03:00: marked 95% (2570 of 2705 index files)
2025-05-17T07:49:39-03:00: marked 96% (2597 of 2705 index files)
2025-05-17T07:50:31-03:00: marked 97% (2624 of 2705 index files)
2025-05-17T07:52:42-03:00: marked 98% (2651 of 2705 index files)
2025-05-17T07:57:15-03:00: marked 99% (2678 of 2705 index files)
2025-05-17T07:58:08-03:00: marked 100% (2705 of 2705 index files)
2025-05-17T07:58:08-03:00: Start GC phase2 (sweep unused chunks)
2025-05-17T08:18:15-03:00: processed 1% (247370 chunks)
2025-05-17T08:38:13-03:00: processed 2% (494585 chunks)
2025-05-17T08:58:00-03:00: processed 3% (741474 chunks)
2025-05-17T09:18:56-03:00: processed 4% (988389 chunks)
2025-05-17T09:39:14-03:00: processed 5% (1234794 chunks)
2025-05-17T10:00:52-03:00: processed 6% (1482125 chunks)
2025-05-17T10:21:22-03:00: processed 7% (1729056 chunks)
2025-05-17T10:41:25-03:00: processed 8% (1974885 chunks)
2025-05-17T11:04:16-03:00: processed 9% (2222547 chunks)
2025-05-17T11:24:51-03:00: processed 10% (2469563 chunks)
2025-05-17T11:46:40-03:00: processed 11% (2716414 chunks)
2025-05-17T12:13:54-03:00: processed 12% (2963273 chunks)
2025-05-17T12:36:31-03:00: processed 13% (3210170 chunks)
2025-05-17T12:58:14-03:00: processed 14% (3457780 chunks)
2025-05-17T13:20:14-03:00: processed 15% (3704816 chunks)
2025-05-17T13:41:13-03:00: processed 16% (3952156 chunks)
2025-05-17T14:02:28-03:00: processed 17% (4199749 chunks)
2025-05-17T14:22:32-03:00: processed 18% (4447111 chunks)
2025-05-17T14:43:06-03:00: processed 19% (4694787 chunks)
2025-05-17T15:04:32-03:00: processed 20% (4942178 chunks)
2025-05-17T15:25:07-03:00: processed 21% (5190276 chunks)
2025-05-17T15:45:33-03:00: processed 22% (5438346 chunks)
2025-05-17T16:09:39-03:00: processed 23% (5686360 chunks)
2025-05-17T16:30:48-03:00: processed 24% (5934153 chunks)
2025-05-17T16:51:41-03:00: processed 25% (6180740 chunks)
2025-05-17T17:13:49-03:00: processed 26% (6428207 chunks)
2025-05-17T17:34:23-03:00: processed 27% (6675547 chunks)
2025-05-17T17:55:18-03:00: processed 28% (6923196 chunks)
2025-05-17T18:19:01-03:00: processed 29% (7169918 chunks)
2025-05-17T18:42:07-03:00: processed 30% (7418017 chunks)
2025-05-17T19:16:31-03:00: processed 31% (7665616 chunks)
2025-05-17T19:38:28-03:00: processed 32% (7913246 chunks)
2025-05-17T20:06:41-03:00: processed 33% (8162083 chunks)
2025-05-17T20:35:03-03:00: processed 34% (8411564 chunks)
2025-05-17T21:25:59-03:00: processed 35% (8660285 chunks)
2025-05-17T21:46:56-03:00: processed 36% (8908041 chunks)
2025-05-17T22:25:05-03:00: processed 37% (9157938 chunks)
2025-05-17T22:48:44-03:00: processed 38% (9407322 chunks)
2025-05-17T23:34:33-03:00: processed 39% (9656117 chunks)
2025-05-17T23:58:06-03:00: processed 40% (9905840 chunks)
2025-05-18T01:32:24-03:00: processed 41% (10157139 chunks)
2025-05-18T01:52:03-03:00: processed 42% (10409209 chunks)
2025-05-18T02:19:16-03:00: processed 43% (10661413 chunks)
2025-05-18T02:39:49-03:00: processed 44% (10912939 chunks)
2025-05-18T03:00:08-03:00: processed 45% (11164634 chunks)
2025-05-18T03:44:44-03:00: processed 46% (11417617 chunks)
2025-05-18T04:06:01-03:00: processed 47% (11670773 chunks)
2025-05-18T04:28:59-03:00: processed 48% (11924407 chunks)
2025-05-18T05:13:41-03:00: processed 49% (12176675 chunks)
2025-05-18T05:35:31-03:00: processed 50% (12429411 chunks)
2025-05-18T05:57:02-03:00: processed 51% (12683367 chunks)
2025-05-18T06:35:09-03:00: processed 52% (12937452 chunks)
2025-05-18T06:54:55-03:00: processed 53% (13192350 chunks)
2025-05-18T07:16:08-03:00: processed 54% (13446337 chunks)
2025-05-18T07:38:33-03:00: processed 55% (13700990 chunks)
2025-05-18T08:00:24-03:00: processed 56% (13955609 chunks)
2025-05-18T08:22:27-03:00: processed 57% (14210132 chunks)
2025-05-18T08:44:04-03:00: processed 58% (14464643 chunks)
2025-05-18T09:06:10-03:00: processed 59% (14719004 chunks)
2025-05-18T09:27:40-03:00: processed 60% (14973246 chunks)
2025-05-18T09:48:46-03:00: processed 61% (15227505 chunks)
2025-05-18T10:10:52-03:00: processed 62% (15483047 chunks)
2025-05-18T10:31:29-03:00: processed 63% (15736900 chunks)
2025-05-18T10:53:14-03:00: processed 64% (15992040 chunks)
2025-05-18T11:16:13-03:00: processed 65% (16246164 chunks)
2025-05-18T11:39:33-03:00: processed 66% (16500104 chunks)
2025-05-18T12:05:10-03:00: processed 67% (16754503 chunks)
2025-05-18T12:28:00-03:00: processed 68% (17008663 chunks)
2025-05-18T12:49:38-03:00: processed 69% (17263333 chunks)
2025-05-18T13:11:08-03:00: processed 70% (17517886 chunks)
2025-05-18T13:32:47-03:00: processed 71% (17772934 chunks)
2025-05-18T13:55:13-03:00: processed 72% (18027614 chunks)
2025-05-18T14:18:05-03:00: processed 73% (18282572 chunks)
2025-05-18T14:38:45-03:00: processed 74% (18537711 chunks)
2025-05-18T14:59:52-03:00: processed 75% (18792302 chunks)
2025-05-18T15:22:01-03:00: processed 76% (19047605 chunks)
2025-05-18T15:43:19-03:00: processed 77% (19302451 chunks)
2025-05-18T16:06:32-03:00: processed 78% (19557514 chunks)
2025-05-18T16:26:36-03:00: processed 79% (19811402 chunks)
2025-05-18T16:47:21-03:00: processed 80% (20065905 chunks)
2025-05-18T17:09:15-03:00: processed 81% (20321711 chunks)
2025-05-18T17:30:17-03:00: processed 82% (20577187 chunks)
2025-05-18T17:51:16-03:00: processed 83% (20831881 chunks)
2025-05-18T18:14:08-03:00: processed 84% (21086410 chunks)
2025-05-18T18:37:12-03:00: processed 85% (21341886 chunks)
2025-05-18T18:58:41-03:00: processed 86% (21596066 chunks)
2025-05-18T19:28:02-03:00: processed 87% (21851198 chunks)
2025-05-18T19:51:49-03:00: processed 88% (22105624 chunks)
2025-05-18T20:17:36-03:00: processed 89% (22361674 chunks)
2025-05-18T20:38:36-03:00: processed 90% (22617076 chunks)
2025-05-18T21:09:40-03:00: processed 91% (22873364 chunks)
2025-05-18T21:31:41-03:00: processed 92% (23128935 chunks)
2025-05-18T21:52:57-03:00: processed 93% (23384435 chunks)
2025-05-18T22:31:26-03:00: processed 94% (23638829 chunks)
2025-05-18T22:53:20-03:00: processed 95% (23896261 chunks)
2025-05-18T23:49:08-03:00: processed 96% (24152997 chunks)
2025-05-19T00:54:38-03:00: processed 97% (24411000 chunks)
2025-05-19T01:40:00-03:00: processed 98% (24669746 chunks)
2025-05-19T02:09:44-03:00: processed 99% (24928092 chunks)
2025-05-19T02:31:07-03:00: Removed garbage: 8.658 TiB
2025-05-19T02:31:07-03:00: Removed chunks: 7140958
2025-05-19T02:31:07-03:00: Pending removals: 1.116 GiB (in 1176 chunks)
2025-05-19T02:31:07-03:00: Original data usage: 721.212 TiB
2025-05-19T02:31:07-03:00: On-Disk usage: 34.436 TiB (4.77%)
2025-05-19T02:31:07-03:00: On-Disk chunks: 18045094
2025-05-19T02:31:07-03:00: Deduplication factor: 20.94
2025-05-19T02:31:07-03:00: Average chunk size: 2.001 MiB
2025-05-19T02:31:07-03:00: TASK WARNINGS: 16282

Another thing came to mind: Do you have symlinked parts of your datastore contents?
Also, please do post the output of find /path/to/datastore/ -type l.
I have no symbolic links in the storage; no files are found with the command find /mnt/datastore/pbs001/ -type l
Just to point this out once again: with PBS version 3.3 and your short retention, you are at risk of cleaning up in-use chunks. Finding the root cause of the increased GC runtime with version 3.4 is therefore important. Maybe you could perform a GC run with the new version, but without performing any backups while the GC is ongoing, in order to see if the newly created snapshots are causing the increased runtime?

I understand, but I can't stop running backup tasks in order to run the GC; note that the GC takes between 1 and 2 days, depending on the amount of files to delete. Going that long without backups is impractical, since the PBS in question is in production. And it has always been this way; we are already "used to it".

Regarding possible problems with deleting in-use chunks, I increased the backup retention, and this problem no longer occurs.

For me, the problem in version 3.4 is the delay in phase 1. Note that the task starts on Saturdays at 5 am and takes about 3-4 hours. This schedule falls precisely in a period when there are no backups running, so we cannot attribute the longer phase 1 in version 3.4 to new snapshots being created, since in version 3.3 the same GC schedule takes less time, also without any backups running.
 
Hey,

we have the same problem after updating to version 3.4: a very slow phase 1, despite having set the chunk cache capacity to the maximum.
I can create a new thread if needed but wanted to share our experience here.

Here is the last GC run before the update:

Code:
2025-05-19T08:00:00+02:00: starting garbage collection on store pbs-1-zfs
2025-05-19T08:00:00+02:00: task triggered by schedule '08:00'
2025-05-19T08:00:00+02:00: Start GC phase1 (mark used chunks)
2025-05-19T08:00:33+02:00: marked 1% (8 of 745 index files)
2025-05-19T08:02:49+02:00: marked 2% (15 of 745 index files)
2025-05-19T08:02:52+02:00: marked 3% (23 of 745 index files)
2025-05-19T08:11:03+02:00: marked 4% (30 of 745 index files)
2025-05-19T08:14:25+02:00: marked 5% (38 of 745 index files)
2025-05-19T08:14:28+02:00: marked 6% (45 of 745 index files)
2025-05-19T08:16:44+02:00: marked 7% (53 of 745 index files)
2025-05-19T08:16:56+02:00: marked 8% (60 of 745 index files)
2025-05-19T08:17:45+02:00: marked 9% (68 of 745 index files)
2025-05-19T08:18:00+02:00: marked 10% (75 of 745 index files)
2025-05-19T08:18:09+02:00: marked 11% (82 of 745 index files)
2025-05-19T08:19:18+02:00: marked 12% (90 of 745 index files)
2025-05-19T08:19:24+02:00: marked 13% (97 of 745 index files)
2025-05-19T08:20:16+02:00: marked 14% (105 of 745 index files)
2025-05-19T08:20:36+02:00: marked 15% (112 of 745 index files)
2025-05-19T08:20:37+02:00: marked 16% (120 of 745 index files)
2025-05-19T08:20:58+02:00: marked 17% (127 of 745 index files)
2025-05-19T08:21:26+02:00: marked 18% (135 of 745 index files)
2025-05-19T08:21:50+02:00: marked 19% (142 of 745 index files)
2025-05-19T08:22:21+02:00: marked 20% (149 of 745 index files)
2025-05-19T08:22:24+02:00: marked 21% (157 of 745 index files)
2025-05-19T08:22:58+02:00: marked 22% (164 of 745 index files)
2025-05-19T08:23:30+02:00: marked 23% (172 of 745 index files)
2025-05-19T08:23:45+02:00: marked 24% (179 of 745 index files)
2025-05-19T08:24:02+02:00: marked 25% (187 of 745 index files)
2025-05-19T08:24:11+02:00: marked 26% (194 of 745 index files)
2025-05-19T08:24:16+02:00: marked 27% (202 of 745 index files)
2025-05-19T08:24:17+02:00: marked 28% (209 of 745 index files)
2025-05-19T08:24:22+02:00: marked 29% (217 of 745 index files)
2025-05-19T08:24:30+02:00: marked 30% (224 of 745 index files)
2025-05-19T08:24:30+02:00: marked 31% (231 of 745 index files)
2025-05-19T08:24:31+02:00: marked 32% (239 of 745 index files)
2025-05-19T08:24:32+02:00: marked 33% (246 of 745 index files)
2025-05-19T08:24:35+02:00: marked 34% (254 of 745 index files)
2025-05-19T08:24:36+02:00: marked 35% (261 of 745 index files)
2025-05-19T08:24:38+02:00: marked 36% (269 of 745 index files)
2025-05-19T08:24:39+02:00: marked 37% (276 of 745 index files)
2025-05-19T08:24:40+02:00: marked 38% (284 of 745 index files)
2025-05-19T08:24:41+02:00: marked 39% (291 of 745 index files)
2025-05-19T08:24:41+02:00: marked 40% (298 of 745 index files)
2025-05-19T08:24:43+02:00: marked 41% (306 of 745 index files)
2025-05-19T08:24:43+02:00: marked 42% (313 of 745 index files)
2025-05-19T08:24:53+02:00: marked 43% (321 of 745 index files)
2025-05-19T08:24:58+02:00: marked 44% (328 of 745 index files)
2025-05-19T08:25:06+02:00: marked 45% (336 of 745 index files)
2025-05-19T08:25:11+02:00: marked 46% (343 of 745 index files)
2025-05-19T08:25:15+02:00: marked 47% (351 of 745 index files)
2025-05-19T08:25:18+02:00: marked 48% (358 of 745 index files)
2025-05-19T08:25:30+02:00: marked 49% (366 of 745 index files)
2025-05-19T08:25:34+02:00: marked 50% (373 of 745 index files)
2025-05-19T08:25:36+02:00: marked 51% (380 of 745 index files)
2025-05-19T08:25:47+02:00: marked 52% (388 of 745 index files)
2025-05-19T08:26:27+02:00: marked 53% (395 of 745 index files)
2025-05-19T08:26:32+02:00: marked 54% (403 of 745 index files)
2025-05-19T08:28:21+02:00: marked 55% (410 of 745 index files)
2025-05-19T08:28:31+02:00: marked 56% (418 of 745 index files)
2025-05-19T08:28:37+02:00: marked 57% (425 of 745 index files)
2025-05-19T08:29:14+02:00: marked 58% (433 of 745 index files)
2025-05-19T08:30:11+02:00: marked 59% (440 of 745 index files)
2025-05-19T08:30:41+02:00: marked 60% (447 of 745 index files)
2025-05-19T08:31:23+02:00: marked 61% (455 of 745 index files)
2025-05-19T08:35:12+02:00: marked 62% (462 of 745 index files)
2025-05-19T08:35:35+02:00: marked 63% (470 of 745 index files)
2025-05-19T08:37:12+02:00: marked 64% (477 of 745 index files)
2025-05-19T08:40:06+02:00: marked 65% (485 of 745 index files)
2025-05-19T08:40:42+02:00: marked 66% (492 of 745 index files)
2025-05-19T08:43:30+02:00: marked 67% (500 of 745 index files)
2025-05-19T08:44:51+02:00: marked 68% (507 of 745 index files)
2025-05-19T08:47:15+02:00: marked 69% (515 of 745 index files)
2025-05-19T08:50:37+02:00: marked 70% (522 of 745 index files)
2025-05-19T08:52:00+02:00: marked 71% (529 of 745 index files)
2025-05-19T08:52:16+02:00: marked 72% (537 of 745 index files)
2025-05-19T08:52:34+02:00: marked 73% (544 of 745 index files)
2025-05-19T08:52:47+02:00: marked 74% (552 of 745 index files)
2025-05-19T08:52:55+02:00: marked 75% (559 of 745 index files)
2025-05-19T08:54:43+02:00: marked 76% (567 of 745 index files)
2025-05-19T08:55:28+02:00: marked 77% (574 of 745 index files)
2025-05-19T08:55:56+02:00: marked 78% (582 of 745 index files)
2025-05-19T08:56:11+02:00: marked 79% (589 of 745 index files)
2025-05-19T08:56:22+02:00: marked 80% (596 of 745 index files)
2025-05-19T08:56:43+02:00: marked 81% (604 of 745 index files)
2025-05-19T08:57:09+02:00: marked 82% (611 of 745 index files)
2025-05-19T08:57:28+02:00: marked 83% (619 of 745 index files)
2025-05-19T08:57:46+02:00: marked 84% (626 of 745 index files)
2025-05-19T08:58:30+02:00: marked 85% (634 of 745 index files)
2025-05-19T08:58:58+02:00: marked 86% (641 of 745 index files)
2025-05-19T08:59:37+02:00: marked 87% (649 of 745 index files)
2025-05-19T08:59:53+02:00: marked 88% (656 of 745 index files)
2025-05-19T09:00:27+02:00: marked 89% (664 of 745 index files)
2025-05-19T09:02:47+02:00: marked 90% (671 of 745 index files)
2025-05-19T09:04:08+02:00: marked 91% (678 of 745 index files)
2025-05-19T09:05:19+02:00: marked 92% (686 of 745 index files)
2025-05-19T09:05:35+02:00: marked 93% (693 of 745 index files)
2025-05-19T09:05:57+02:00: marked 94% (701 of 745 index files)
2025-05-19T09:06:16+02:00: marked 95% (708 of 745 index files)
2025-05-19T09:06:31+02:00: marked 96% (716 of 745 index files)
2025-05-19T09:06:47+02:00: marked 97% (723 of 745 index files)
2025-05-19T09:07:04+02:00: marked 98% (731 of 745 index files)
2025-05-19T09:07:11+02:00: marked 99% (738 of 745 index files)
2025-05-19T09:07:20+02:00: marked 100% (745 of 745 index files)
2025-05-19T09:07:20+02:00: Start GC phase2 (sweep unused chunks)
2025-05-19T09:07:36+02:00: processed 1% (26952 chunks)
2025-05-19T09:07:47+02:00: processed 2% (54091 chunks)
2025-05-19T09:07:58+02:00: processed 3% (81374 chunks)
2025-05-19T09:08:08+02:00: processed 4% (108217 chunks)
2025-05-19T09:08:16+02:00: processed 5% (135373 chunks)
2025-05-19T09:08:23+02:00: processed 6% (162502 chunks)
2025-05-19T09:08:31+02:00: processed 7% (189527 chunks)
2025-05-19T09:08:37+02:00: processed 8% (216324 chunks)
2025-05-19T09:08:43+02:00: processed 9% (243352 chunks)
2025-05-19T09:08:50+02:00: processed 10% (270519 chunks)
2025-05-19T09:08:58+02:00: processed 11% (297487 chunks)
2025-05-19T09:09:05+02:00: processed 12% (324560 chunks)
2025-05-19T09:09:11+02:00: processed 13% (351469 chunks)
2025-05-19T09:09:18+02:00: processed 14% (378744 chunks)
2025-05-19T09:09:27+02:00: processed 15% (405926 chunks)
2025-05-19T09:09:37+02:00: processed 16% (432953 chunks)
2025-05-19T09:09:45+02:00: processed 17% (459722 chunks)
2025-05-19T09:09:53+02:00: processed 18% (486574 chunks)
2025-05-19T09:10:01+02:00: processed 19% (513525 chunks)
2025-05-19T09:10:12+02:00: processed 20% (540698 chunks)
2025-05-19T09:10:24+02:00: processed 21% (567891 chunks)
2025-05-19T09:10:36+02:00: processed 22% (595017 chunks)
2025-05-19T09:10:48+02:00: processed 23% (622119 chunks)
2025-05-19T09:11:00+02:00: processed 24% (649278 chunks)
2025-05-19T09:11:13+02:00: processed 25% (676307 chunks)
2025-05-19T09:11:26+02:00: processed 26% (703908 chunks)
2025-05-19T09:11:42+02:00: processed 27% (730935 chunks)
2025-05-19T09:11:57+02:00: processed 28% (757829 chunks)
2025-05-19T09:12:13+02:00: processed 29% (784814 chunks)
2025-05-19T09:12:28+02:00: processed 30% (811834 chunks)
2025-05-19T09:12:46+02:00: processed 31% (838923 chunks)
2025-05-19T09:13:04+02:00: processed 32% (865763 chunks)
2025-05-19T09:13:23+02:00: processed 33% (892859 chunks)
2025-05-19T09:13:40+02:00: processed 34% (919720 chunks)
2025-05-19T09:13:58+02:00: processed 35% (946555 chunks)
2025-05-19T09:14:17+02:00: processed 36% (973611 chunks)
2025-05-19T09:14:41+02:00: processed 37% (1001092 chunks)
2025-05-19T09:15:05+02:00: processed 38% (1028161 chunks)
2025-05-19T09:15:27+02:00: processed 39% (1055138 chunks)
2025-05-19T09:15:51+02:00: processed 40% (1082074 chunks)
2025-05-19T09:16:16+02:00: processed 41% (1109383 chunks)
2025-05-19T09:16:39+02:00: processed 42% (1136471 chunks)
2025-05-19T09:17:01+02:00: processed 43% (1163520 chunks)
2025-05-19T09:17:25+02:00: processed 44% (1190706 chunks)
2025-05-19T09:17:49+02:00: processed 45% (1217888 chunks)
2025-05-19T09:18:12+02:00: processed 46% (1244829 chunks)
2025-05-19T09:18:35+02:00: processed 47% (1272019 chunks)
2025-05-19T09:18:58+02:00: processed 48% (1299041 chunks)
2025-05-19T09:19:22+02:00: processed 49% (1326198 chunks)
2025-05-19T09:19:45+02:00: processed 50% (1353103 chunks)
2025-05-19T09:20:08+02:00: processed 51% (1380266 chunks)
2025-05-19T09:20:30+02:00: processed 52% (1406967 chunks)
2025-05-19T09:20:53+02:00: processed 53% (1434060 chunks)
2025-05-19T09:21:17+02:00: processed 54% (1461122 chunks)
2025-05-19T09:21:40+02:00: processed 55% (1488429 chunks)
2025-05-19T09:22:03+02:00: processed 56% (1515462 chunks)
2025-05-19T09:22:24+02:00: processed 57% (1542586 chunks)
2025-05-19T09:22:46+02:00: processed 58% (1570019 chunks)
2025-05-19T09:23:08+02:00: processed 59% (1597097 chunks)
2025-05-19T09:23:29+02:00: processed 60% (1624135 chunks)
2025-05-19T09:23:52+02:00: processed 61% (1651089 chunks)
2025-05-19T09:24:14+02:00: processed 62% (1678113 chunks)
2025-05-19T09:24:36+02:00: processed 63% (1705238 chunks)
2025-05-19T09:24:58+02:00: processed 64% (1732192 chunks)
2025-05-19T09:25:21+02:00: processed 65% (1759440 chunks)
2025-05-19T09:25:42+02:00: processed 66% (1786385 chunks)
2025-05-19T09:26:04+02:00: processed 67% (1813130 chunks)
2025-05-19T09:26:26+02:00: processed 68% (1840214 chunks)
2025-05-19T09:26:49+02:00: processed 69% (1867413 chunks)
2025-05-19T09:27:11+02:00: processed 70% (1894543 chunks)
2025-05-19T09:27:34+02:00: processed 71% (1921448 chunks)
2025-05-19T09:27:55+02:00: processed 72% (1948507 chunks)
2025-05-19T09:28:19+02:00: processed 73% (1975687 chunks)
2025-05-19T09:28:41+02:00: processed 74% (2002486 chunks)
2025-05-19T09:29:05+02:00: processed 75% (2029541 chunks)
2025-05-19T09:29:35+02:00: processed 76% (2056576 chunks)
2025-05-19T09:30:05+02:00: processed 77% (2083673 chunks)
2025-05-19T09:30:35+02:00: processed 78% (2110800 chunks)
2025-05-19T09:31:06+02:00: processed 79% (2137765 chunks)
2025-05-19T09:31:43+02:00: processed 80% (2165049 chunks)
2025-05-19T09:32:19+02:00: processed 81% (2192145 chunks)
2025-05-19T09:32:57+02:00: processed 82% (2219113 chunks)
2025-05-19T09:33:33+02:00: processed 83% (2246101 chunks)
2025-05-19T09:34:11+02:00: processed 84% (2273448 chunks)
2025-05-19T09:34:50+02:00: processed 85% (2300423 chunks)
2025-05-19T09:35:28+02:00: processed 86% (2327168 chunks)
2025-05-19T09:36:08+02:00: processed 87% (2354231 chunks)
2025-05-19T09:36:47+02:00: processed 88% (2381013 chunks)
2025-05-19T09:37:25+02:00: processed 89% (2408113 chunks)
2025-05-19T09:38:03+02:00: processed 90% (2434958 chunks)
2025-05-19T09:38:42+02:00: processed 91% (2462138 chunks)
2025-05-19T09:39:20+02:00: processed 92% (2488924 chunks)
2025-05-19T09:39:58+02:00: processed 93% (2516141 chunks)
2025-05-19T09:40:37+02:00: processed 94% (2543379 chunks)
2025-05-19T09:41:15+02:00: processed 95% (2570434 chunks)
2025-05-19T09:41:52+02:00: processed 96% (2597387 chunks)
2025-05-19T09:42:30+02:00: processed 97% (2624390 chunks)
2025-05-19T09:43:07+02:00: processed 98% (2651469 chunks)
2025-05-19T09:43:45+02:00: processed 99% (2678593 chunks)
2025-05-19T09:44:23+02:00: Removed garbage: 12.947 GiB
2025-05-19T09:44:23+02:00: Removed chunks: 6119
2025-05-19T09:44:23+02:00: Pending removals: 18.168 GiB (in 8186 chunks)
2025-05-19T09:44:23+02:00: Original data usage: 51.139 TiB
2025-05-19T09:44:23+02:00: On-Disk usage: 4.769 TiB (9.33%)
2025-05-19T09:44:23+02:00: On-Disk chunks: 2691535
2025-05-19T09:44:23+02:00: Deduplication factor: 10.72
2025-05-19T09:44:23+02:00: Average chunk size: 1.858 MiB
2025-05-19T09:44:23+02:00: TASK OK

I will post the run after the update in the next post.
 
Here is the run after the update. I started this one manually after increasing the cache capacity.

Code:
2025-05-20T10:17:40+02:00: starting garbage collection on store pbs-1-zfs
2025-05-20T10:17:41+02:00: Access time update check successful, proceeding with GC.
2025-05-20T10:17:41+02:00: Using access time cutoff 1d 5m, minimum access time is 2025-05-19T08:12:40Z
2025-05-20T10:17:41+02:00: Using chunk digest cache capacity of 8388608.
2025-05-20T10:17:41+02:00: Start GC phase1 (mark used chunks)
2025-05-20T10:17:49+02:00: marked 1% (9 of 810 index files)
2025-05-20T10:17:51+02:00: marked 2% (17 of 810 index files)
2025-05-20T10:17:52+02:00: marked 3% (25 of 810 index files)
2025-05-20T10:17:52+02:00: marked 4% (33 of 810 index files)
2025-05-20T10:17:53+02:00: marked 5% (41 of 810 index files)
2025-05-20T10:17:53+02:00: marked 6% (49 of 810 index files)
2025-05-20T10:17:54+02:00: marked 7% (57 of 810 index files)
2025-05-20T10:17:54+02:00: marked 8% (65 of 810 index files)
2025-05-20T10:17:55+02:00: marked 9% (73 of 810 index files)
2025-05-20T10:17:55+02:00: marked 10% (81 of 810 index files)
2025-05-20T10:17:55+02:00: marked 11% (90 of 810 index files)
2025-05-20T10:17:55+02:00: marked 12% (98 of 810 index files)
2025-05-20T10:17:55+02:00: marked 13% (106 of 810 index files)
2025-05-20T10:17:55+02:00: marked 14% (114 of 810 index files)
2025-05-20T10:17:56+02:00: marked 15% (122 of 810 index files)
2025-05-20T10:17:56+02:00: marked 16% (130 of 810 index files)
2025-05-20T10:17:56+02:00: marked 17% (138 of 810 index files)
2025-05-20T10:17:56+02:00: marked 18% (146 of 810 index files)
2025-05-20T10:24:24+02:00: marked 19% (154 of 810 index files)
2025-05-20T10:26:44+02:00: marked 20% (162 of 810 index files)
2025-05-20T10:31:49+02:00: marked 21% (171 of 810 index files)
2025-05-20T10:33:47+02:00: marked 22% (179 of 810 index files)
2025-05-20T10:37:19+02:00: marked 23% (187 of 810 index files)
2025-05-20T10:47:13+02:00: marked 24% (195 of 810 index files)
2025-05-20T10:55:13+02:00: marked 25% (203 of 810 index files)
2025-05-20T11:06:46+02:00: marked 26% (211 of 810 index files)
2025-05-20T11:22:47+02:00: marked 27% (219 of 810 index files)
2025-05-20T11:29:19+02:00: marked 28% (227 of 810 index files)
2025-05-20T11:33:57+02:00: marked 29% (235 of 810 index files)
2025-05-20T11:38:19+02:00: marked 30% (243 of 810 index files)
2025-05-20T11:44:21+02:00: marked 31% (252 of 810 index files)
2025-05-20T11:47:10+02:00: marked 32% (260 of 810 index files)
2025-05-20T11:47:11+02:00: marked 33% (268 of 810 index files)
2025-05-20T11:47:11+02:00: marked 34% (276 of 810 index files)
2025-05-20T11:47:11+02:00: marked 35% (284 of 810 index files)
2025-05-20T11:47:11+02:00: marked 36% (292 of 810 index files)
2025-05-20T11:47:12+02:00: marked 37% (300 of 810 index files)
2025-05-20T11:47:12+02:00: marked 38% (308 of 810 index files)
2025-05-20T11:47:12+02:00: marked 39% (316 of 810 index files)
2025-05-20T11:49:59+02:00: marked 40% (324 of 810 index files)
2025-05-20T11:52:38+02:00: marked 41% (333 of 810 index files)
2025-05-20T11:54:21+02:00: marked 42% (341 of 810 index files)
2025-05-20T12:04:08+02:00: marked 43% (349 of 810 index files)
2025-05-20T12:05:15+02:00: marked 44% (357 of 810 index files)
2025-05-20T12:09:14+02:00: marked 45% (365 of 810 index files)
2025-05-20T12:10:32+02:00: marked 46% (373 of 810 index files)
2025-05-20T12:11:57+02:00: marked 47% (381 of 810 index files)
2025-05-20T12:19:29+02:00: marked 48% (389 of 810 index files)
2025-05-20T12:38:51+02:00: marked 49% (397 of 810 index files)
2025-05-20T12:41:37+02:00: marked 50% (405 of 810 index files)
2025-05-20T12:45:06+02:00: marked 51% (414 of 810 index files)
2025-05-20T12:54:34+02:00: marked 52% (422 of 810 index files)
2025-05-20T13:05:37+02:00: marked 53% (430 of 810 index files)
2025-05-20T13:09:17+02:00: marked 54% (438 of 810 index files)
2025-05-20T13:28:41+02:00: marked 55% (446 of 810 index files)
2025-05-20T13:30:29+02:00: marked 56% (454 of 810 index files)
2025-05-20T13:31:59+02:00: marked 57% (462 of 810 index files)
2025-05-20T13:34:15+02:00: marked 58% (470 of 810 index files)
2025-05-20T13:36:46+02:00: marked 59% (478 of 810 index files)
2025-05-20T13:37:49+02:00: marked 60% (486 of 810 index files)
2025-05-20T13:42:37+02:00: marked 61% (495 of 810 index files)
2025-05-20T13:49:02+02:00: marked 62% (503 of 810 index files)
2025-05-20T13:56:33+02:00: marked 63% (511 of 810 index files)
2025-05-20T13:59:13+02:00: marked 64% (519 of 810 index files)
2025-05-20T14:00:25+02:00: marked 65% (527 of 810 index files)
2025-05-20T14:03:01+02:00: marked 66% (535 of 810 index files)
2025-05-20T14:09:29+02:00: marked 67% (543 of 810 index files)
2025-05-20T14:21:57+02:00: marked 68% (551 of 810 index files)
2025-05-20T14:28:28+02:00: marked 69% (559 of 810 index files)
2025-05-20T14:58:21+02:00: marked 70% (567 of 810 index files)
2025-05-20T15:37:38+02:00: marked 71% (576 of 810 index files)
2025-05-20T15:59:34+02:00: marked 72% (584 of 810 index files)
2025-05-20T15:59:35+02:00: marked 73% (592 of 810 index files)
2025-05-20T16:02:40+02:00: marked 74% (600 of 810 index files)
2025-05-20T16:04:33+02:00: marked 75% (608 of 810 index files)
2025-05-20T16:20:55+02:00: marked 76% (616 of 810 index files)
2025-05-20T16:43:24+02:00: marked 77% (624 of 810 index files)
2025-05-20T16:54:29+02:00: marked 78% (632 of 810 index files)
2025-05-20T16:55:23+02:00: marked 79% (640 of 810 index files)
2025-05-20T16:58:31+02:00: marked 80% (648 of 810 index files)
2025-05-20T17:05:01+02:00: marked 81% (657 of 810 index files)
2025-05-20T17:07:45+02:00: marked 82% (665 of 810 index files)
2025-05-20T17:09:02+02:00: marked 83% (673 of 810 index files)
2025-05-20T17:13:42+02:00: marked 84% (681 of 810 index files)
2025-05-20T17:19:43+02:00: marked 85% (689 of 810 index files)
2025-05-20T17:21:26+02:00: marked 86% (697 of 810 index files)
2025-05-20T17:23:33+02:00: marked 87% (705 of 810 index files)
2025-05-20T17:30:44+02:00: marked 88% (713 of 810 index files)
2025-05-20T17:39:20+02:00: marked 89% (721 of 810 index files)
2025-05-20T17:58:53+02:00: marked 90% (729 of 810 index files)
2025-05-20T18:13:11+02:00: marked 91% (738 of 810 index files)
2025-05-20T18:16:12+02:00: Encountered new index file '"/mnt/datastore/pbs-1-zfs/ns/pve-5/vm/511/2025-05-20T16:09:39Z/drive-virtio0.img.fidx"', increment total index file count
2025-05-20T18:16:12+02:00: Encountered new index file '"/mnt/datastore/pbs-1-zfs/ns/pve-5/vm/511/2025-05-20T16:09:39Z/drive-efidisk0.img.fidx"', increment total index file count
2025-05-20T18:16:12+02:00: marked 92% (748 of 812 index files)
2025-05-20T18:16:36+02:00: marked 93% (756 of 812 index files)
2025-05-20T18:17:27+02:00: Encountered new index file '"/mnt/datastore/pbs-1-zfs/ns/pve-5/vm/7025/2025-05-20T16:10:20Z/drive-scsi0.img.fidx"', increment total index file count
2025-05-20T18:17:43+02:00: marked 94% (765 of 813 index files)
2025-05-20T18:17:57+02:00: Encountered new index file '"/mnt/datastore/pbs-1-zfs/ns/pve-5/vm/510/2025-05-20T16:08:05Z/drive-efidisk0.img.fidx"', increment total index file count
2025-05-20T18:18:18+02:00: Encountered new index file '"/mnt/datastore/pbs-1-zfs/ns/pve-5/vm/510/2025-05-20T16:08:05Z/drive-virtio0.img.fidx"', increment total index file count
2025-05-20T18:18:28+02:00: marked 95% (775 of 815 index files)
2025-05-20T18:18:48+02:00: marked 96% (783 of 815 index files)
2025-05-20T18:19:05+02:00: Encountered new index file '"/mnt/datastore/pbs-1-zfs/ns/pve-5/vm/7113/2025-05-20T16:12:35Z/drive-scsi0.img.fidx"', increment total index file count
2025-05-20T18:19:07+02:00: marked 97% (792 of 816 index files)
2025-05-20T18:19:11+02:00: marked 98% (800 of 816 index files)
2025-05-20T18:19:30+02:00: marked 99% (808 of 816 index files)
2025-05-20T18:19:57+02:00: marked 100% (816 of 816 index files)
2025-05-20T18:19:58+02:00: Start GC phase2 (sweep unused chunks)
2025-05-20T18:20:21+02:00: processed 1% (28724 chunks)
2025-05-20T18:20:39+02:00: processed 2% (57649 chunks)
2025-05-20T18:20:57+02:00: processed 3% (86672 chunks)
2025-05-20T18:21:12+02:00: processed 4% (115291 chunks)
2025-05-20T18:21:25+02:00: processed 5% (144266 chunks)
2025-05-20T18:21:38+02:00: processed 6% (173146 chunks)
2025-05-20T18:21:49+02:00: processed 7% (201931 chunks)
2025-05-20T18:22:00+02:00: processed 8% (230531 chunks)
2025-05-20T18:22:10+02:00: processed 9% (259305 chunks)
2025-05-20T18:22:20+02:00: processed 10% (288181 chunks)
2025-05-20T18:22:29+02:00: processed 11% (316950 chunks)
2025-05-20T18:22:39+02:00: processed 12% (345777 chunks)
2025-05-20T18:22:47+02:00: processed 13% (374393 chunks)
2025-05-20T18:22:55+02:00: processed 14% (403402 chunks)
2025-05-20T18:23:03+02:00: processed 15% (432318 chunks)
2025-05-20T18:23:10+02:00: processed 16% (461044 chunks)
2025-05-20T18:23:17+02:00: processed 17% (489592 chunks)
2025-05-20T18:23:24+02:00: processed 18% (518235 chunks)
2025-05-20T18:23:31+02:00: processed 19% (546929 chunks)
2025-05-20T18:23:38+02:00: processed 20% (575880 chunks)
2025-05-20T18:23:44+02:00: processed 21% (604791 chunks)
2025-05-20T18:23:51+02:00: processed 22% (633650 chunks)
2025-05-20T18:23:57+02:00: processed 23% (662519 chunks)
2025-05-20T18:24:04+02:00: processed 24% (691443 chunks)
2025-05-20T18:24:10+02:00: processed 25% (720196 chunks)
2025-05-20T18:24:16+02:00: processed 26% (749539 chunks)
2025-05-20T18:24:24+02:00: processed 27% (778245 chunks)
2025-05-20T18:24:35+02:00: processed 28% (806958 chunks)
2025-05-20T18:24:50+02:00: processed 29% (835739 chunks)
2025-05-20T18:25:05+02:00: processed 30% (864508 chunks)
2025-05-20T18:25:21+02:00: processed 31% (893216 chunks)
2025-05-20T18:25:36+02:00: processed 32% (921788 chunks)
2025-05-20T18:25:50+02:00: processed 33% (950672 chunks)
2025-05-20T18:26:07+02:00: processed 34% (979285 chunks)
2025-05-20T18:26:28+02:00: processed 35% (1007818 chunks)
2025-05-20T18:26:49+02:00: processed 36% (1036614 chunks)
2025-05-20T18:27:12+02:00: processed 37% (1065878 chunks)
2025-05-20T18:27:34+02:00: processed 38% (1094673 chunks)
2025-05-20T18:27:56+02:00: processed 39% (1123404 chunks)
2025-05-20T18:28:21+02:00: processed 40% (1152119 chunks)
2025-05-20T18:28:48+02:00: processed 41% (1181143 chunks)
2025-05-20T18:29:17+02:00: processed 42% (1209949 chunks)
2025-05-20T18:29:45+02:00: processed 43% (1238758 chunks)
2025-05-20T18:30:13+02:00: processed 44% (1267641 chunks)
2025-05-20T18:30:43+02:00: processed 45% (1296626 chunks)
2025-05-20T18:31:18+02:00: processed 46% (1325346 chunks)
2025-05-20T18:31:56+02:00: processed 47% (1354252 chunks)
2025-05-20T18:32:33+02:00: processed 48% (1383065 chunks)
2025-05-20T18:33:11+02:00: processed 49% (1411932 chunks)
2025-05-20T18:33:49+02:00: processed 50% (1440550 chunks)
2025-05-20T18:34:26+02:00: processed 51% (1469469 chunks)
2025-05-20T18:35:04+02:00: processed 52% (1497853 chunks)
2025-05-20T18:35:43+02:00: processed 53% (1526711 chunks)
2025-05-20T18:36:22+02:00: processed 54% (1555542 chunks)
2025-05-20T18:37:01+02:00: processed 55% (1584509 chunks)
2025-05-20T18:37:39+02:00: processed 56% (1613245 chunks)
2025-05-20T18:38:18+02:00: processed 57% (1642089 chunks)
2025-05-20T18:38:57+02:00: processed 58% (1671333 chunks)
2025-05-20T18:39:36+02:00: processed 59% (1700123 chunks)
2025-05-20T18:40:15+02:00: processed 60% (1728882 chunks)
2025-05-20T18:40:53+02:00: processed 61% (1757606 chunks)
2025-05-20T18:41:32+02:00: processed 62% (1786340 chunks)
2025-05-20T18:42:12+02:00: processed 63% (1815270 chunks)
2025-05-20T18:42:52+02:00: processed 64% (1843997 chunks)
2025-05-20T18:43:31+02:00: processed 65% (1872984 chunks)
2025-05-20T18:44:09+02:00: processed 66% (1901741 chunks)
2025-05-20T18:44:48+02:00: processed 67% (1930238 chunks)
2025-05-20T18:45:27+02:00: processed 68% (1959120 chunks)
2025-05-20T18:46:07+02:00: processed 69% (1988077 chunks)
2025-05-20T18:46:46+02:00: processed 70% (2016946 chunks)
2025-05-20T18:47:26+02:00: processed 71% (2045563 chunks)
2025-05-20T18:48:06+02:00: processed 72% (2074333 chunks)
2025-05-20T18:48:46+02:00: processed 73% (2103279 chunks)
2025-05-20T18:49:25+02:00: processed 74% (2131802 chunks)
2025-05-20T18:50:06+02:00: processed 75% (2160560 chunks)
2025-05-20T18:50:46+02:00: processed 76% (2189352 chunks)
2025-05-20T18:51:27+02:00: processed 77% (2218273 chunks)
2025-05-20T18:52:07+02:00: processed 78% (2247123 chunks)
2025-05-20T18:52:46+02:00: processed 79% (2275836 chunks)
2025-05-20T18:53:27+02:00: processed 80% (2304883 chunks)
2025-05-20T18:54:08+02:00: processed 81% (2333700 chunks)
2025-05-20T18:54:49+02:00: processed 82% (2362344 chunks)
2025-05-20T18:55:30+02:00: processed 83% (2391070 chunks)
2025-05-20T18:56:11+02:00: processed 84% (2420095 chunks)
2025-05-20T18:56:52+02:00: processed 85% (2448824 chunks)
2025-05-20T18:57:33+02:00: processed 86% (2477328 chunks)
2025-05-20T18:58:14+02:00: processed 87% (2506161 chunks)
2025-05-20T18:58:55+02:00: processed 88% (2534724 chunks)
2025-05-20T18:59:36+02:00: processed 89% (2563539 chunks)
2025-05-20T19:00:17+02:00: processed 90% (2592055 chunks)
2025-05-20T19:00:57+02:00: processed 91% (2620993 chunks)
2025-05-20T19:01:37+02:00: processed 92% (2649524 chunks)
2025-05-20T19:02:18+02:00: processed 93% (2678419 chunks)
2025-05-20T19:03:00+02:00: processed 94% (2707362 chunks)
2025-05-20T19:03:41+02:00: processed 95% (2736131 chunks)
2025-05-20T19:04:21+02:00: processed 96% (2764843 chunks)
2025-05-20T19:05:02+02:00: processed 97% (2793584 chunks)
2025-05-20T19:05:44+02:00: processed 98% (2822376 chunks)
2025-05-20T19:06:24+02:00: processed 99% (2851264 chunks)
2025-05-20T19:07:05+02:00: Removed garbage: 37.555 GiB
2025-05-20T19:07:05+02:00: Removed chunks: 16962
2025-05-20T19:07:05+02:00: Original data usage: 57.09 TiB
2025-05-20T19:07:05+02:00: On-Disk usage: 5.075 TiB (8.89%)
2025-05-20T19:07:05+02:00: On-Disk chunks: 2863296
2025-05-20T19:07:05+02:00: Deduplication factor: 11.25
2025-05-20T19:07:05+02:00: Average chunk size: 1.859 MiB
2025-05-20T19:07:05+02:00: TASK OK

This is quite the difference. As the datastore is not at full capacity yet, the problem will only get bigger over time as the datastore fills up.
 
Hi,
thanks for sharing the task logs! That is quite the difference you observe. Please do also provide some more details about the storage, so we can try to reproduce the issue, thanks.
 
The PBS is a physical machine with 4x 8TB HDDs in a ZFS RAIDZ1. Current usage is 26%.
We have a namespace for each PVE. The backups all run during the night and early morning, so no backup is active while the prune and GC tasks run.

If any specific information is needed, I'm happy to provide it as best I can.
 
Could you try to trace the syscalls during garbage collection by running strace -ftt -e %file,%desc -p $(pidof proxmox-backup-proxy) -o /path/to/output.txt? Make sure to point the output to a location with enough space, as the file will grow quickly, then compress the file and share it by some means. This might help investigate.
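For reference, a complete session could look like the following; the output path and the compression tool are just examples, use whatever fits your setup:
Code:
# trace file- and descriptor-related syscalls of the running proxy process
# (example output path - pick any location with enough free space)
strace -ftt -e %file,%desc -p $(pidof proxmox-backup-proxy) -o /mnt/scratch/gc-trace.txt

# stop the trace (Ctrl+C) once the GC task has finished, then compress before sharing
zstd /mnt/scratch/gc-trace.txt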

Also, did you check your memory usage and keep an eye on swap usage during the GC run?
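Something simple would already help here, e.g. sampling once a minute while the GC is running (just one possible approach):
Code:
# print memory and swap usage every 60 seconds, values in MiB
vmstat -w -S M 60

# or take a one-off snapshot including swap
free -h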

Edit: Also, please share the output of zpool get all <poolname> and zfs get all <dataset>
 
Last edited:
Also, do you observe a different runtime behavior when setting the gc-cache-capacity to 0?
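In case you prefer the CLI over the web UI, the option should also be settable like this (a sketch, replace <datastore-name> with your datastore):
Code:
# disable the GC chunk cache (0 = no caching) for the next run
proxmox-backup-manager datastore update <datastore-name> --tuning 'gc-cache-capacity=0'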
 
Here is a 1-hour run of the strace command:
https://filebin.net/425zt4tnwzvz6t2r

The memory usage is at ~10 out of 16 GB with 0.43% swap usage while the GC is actively running in phase 1.

I tested the capacity at 0 with a second PBS (which doesn't have a lot of backups). There the GC run took ~5 minutes before the update; after the update it takes ~25 minutes, both with the capacity set to 0 and with a higher value. I can of course test it on the main PBS, but I would do that tomorrow after the current GC is complete.

Code:
NAME       PROPERTY                       VALUE                          SOURCE
pbs-1-zfs  size                           29.1T                          -
pbs-1-zfs  capacity                       25%                            -
pbs-1-zfs  altroot                        -                              default
pbs-1-zfs  health                         ONLINE                         -
pbs-1-zfs  guid                           622809827571632012             -
pbs-1-zfs  version                        -                              default
pbs-1-zfs  bootfs                         -                              default
pbs-1-zfs  delegation                     on                             default
pbs-1-zfs  autoreplace                    off                            default
pbs-1-zfs  cachefile                      none                           local
pbs-1-zfs  failmode                       wait                           default
pbs-1-zfs  listsnapshots                  off                            default
pbs-1-zfs  autoexpand                     off                            default
pbs-1-zfs  dedupratio                     1.00x                          -
pbs-1-zfs  free                           21.6T                          -
pbs-1-zfs  allocated                      7.49T                          -
pbs-1-zfs  readonly                       off                            -
pbs-1-zfs  ashift                         12                             local
pbs-1-zfs  comment                        -                              default
pbs-1-zfs  expandsize                     -                              -
pbs-1-zfs  freeing                        0                              -
pbs-1-zfs  fragmentation                  32%                            -
pbs-1-zfs  leaked                         0                              -
pbs-1-zfs  multihost                      off                            default
pbs-1-zfs  checkpoint                     -                              -
pbs-1-zfs  load_guid                      6004039897533648236            -
pbs-1-zfs  autotrim                       off                            default
pbs-1-zfs  compatibility                  off                            default
pbs-1-zfs  bcloneused                     0                              -
pbs-1-zfs  bclonesaved                    0                              -
pbs-1-zfs  bcloneratio                    1.00x                          -
pbs-1-zfs  feature@async_destroy          enabled                        local
pbs-1-zfs  feature@empty_bpobj            enabled                        local
pbs-1-zfs  feature@lz4_compress           active                         local
pbs-1-zfs  feature@multi_vdev_crash_dump  enabled                        local
pbs-1-zfs  feature@spacemap_histogram     active                         local
pbs-1-zfs  feature@enabled_txg            active                         local
pbs-1-zfs  feature@hole_birth             active                         local
pbs-1-zfs  feature@extensible_dataset     active                         local
pbs-1-zfs  feature@embedded_data          active                         local
pbs-1-zfs  feature@bookmarks              enabled                        local
pbs-1-zfs  feature@filesystem_limits      enabled                        local
pbs-1-zfs  feature@large_blocks           enabled                        local
pbs-1-zfs  feature@large_dnode            enabled                        local
pbs-1-zfs  feature@sha512                 enabled                        local
pbs-1-zfs  feature@skein                  enabled                        local
pbs-1-zfs  feature@edonr                  enabled                        local
pbs-1-zfs  feature@userobj_accounting     active                         local
pbs-1-zfs  feature@encryption             enabled                        local
pbs-1-zfs  feature@project_quota          active                         local
pbs-1-zfs  feature@device_removal         enabled                        local
pbs-1-zfs  feature@obsolete_counts        enabled                        local
pbs-1-zfs  feature@zpool_checkpoint       enabled                        local
pbs-1-zfs  feature@spacemap_v2            active                         local
pbs-1-zfs  feature@allocation_classes     enabled                        local
pbs-1-zfs  feature@resilver_defer         enabled                        local
pbs-1-zfs  feature@bookmark_v2            enabled                        local
pbs-1-zfs  feature@redaction_bookmarks    enabled                        local
pbs-1-zfs  feature@redacted_datasets      enabled                        local
pbs-1-zfs  feature@bookmark_written       enabled                        local
pbs-1-zfs  feature@log_spacemap           active                         local
pbs-1-zfs  feature@livelist               enabled                        local
pbs-1-zfs  feature@device_rebuild         enabled                        local
pbs-1-zfs  feature@zstd_compress          enabled                        local
pbs-1-zfs  feature@draid                  enabled                        local
pbs-1-zfs  feature@zilsaxattr             active                         local
pbs-1-zfs  feature@head_errlog            active                         local
pbs-1-zfs  feature@blake3                 enabled                        local
pbs-1-zfs  feature@block_cloning          enabled                        local
pbs-1-zfs  feature@vdev_zaps_v2           active                         local

Code:
NAME       PROPERTY              VALUE                     SOURCE
pbs-1-zfs  type                  filesystem                -
pbs-1-zfs  creation              Wed Mar 13 11:18 2024     -
pbs-1-zfs  used                  5.44T                     -
pbs-1-zfs  available             15.3T                     -
pbs-1-zfs  referenced            5.44T                     -
pbs-1-zfs  compressratio         1.02x                     -
pbs-1-zfs  mounted               yes                       -
pbs-1-zfs  quota                 20.7T                     local
pbs-1-zfs  reservation           none                      default
pbs-1-zfs  recordsize            128K                      default
pbs-1-zfs  mountpoint            /mnt/datastore/pbs-1-zfs  local
pbs-1-zfs  sharenfs              off                       default
pbs-1-zfs  checksum              on                        default
pbs-1-zfs  compression           lz4                       local
pbs-1-zfs  atime                 on                        default
pbs-1-zfs  devices               on                        default
pbs-1-zfs  exec                  on                        default
pbs-1-zfs  setuid                on                        default
pbs-1-zfs  readonly              off                       default
pbs-1-zfs  zoned                 off                       default
pbs-1-zfs  snapdir               hidden                    default
pbs-1-zfs  aclmode               discard                   default
pbs-1-zfs  aclinherit            restricted                default
pbs-1-zfs  createtxg             1                         -
pbs-1-zfs  canmount              on                        default
pbs-1-zfs  xattr                 on                        default
pbs-1-zfs  copies                1                         default
pbs-1-zfs  version               5                         -
pbs-1-zfs  utf8only              off                       -
pbs-1-zfs  normalization         none                      -
pbs-1-zfs  casesensitivity       sensitive                 -
pbs-1-zfs  vscan                 off                       default
pbs-1-zfs  nbmand                off                       default
pbs-1-zfs  sharesmb              off                       default
pbs-1-zfs  refquota              none                      default
pbs-1-zfs  refreservation        none                      default
pbs-1-zfs  guid                  3458168430298756986       -
pbs-1-zfs  primarycache          all                       default
pbs-1-zfs  secondarycache        all                       default
pbs-1-zfs  usedbysnapshots       0B                        -
pbs-1-zfs  usedbydataset         5.44T                     -
pbs-1-zfs  usedbychildren        2.05G                     -
pbs-1-zfs  usedbyrefreservation  0B                        -
pbs-1-zfs  logbias               latency                   default
pbs-1-zfs  objsetid              54                        -
pbs-1-zfs  dedup                 off                       default
pbs-1-zfs  mlslabel              none                      default
pbs-1-zfs  sync                  standard                  default
pbs-1-zfs  dnodesize             legacy                    default
pbs-1-zfs  refcompressratio      1.02x                     -
pbs-1-zfs  written               5.44T                     -
pbs-1-zfs  logicalused           5.54T                     -
pbs-1-zfs  logicalreferenced     5.54T                     -
pbs-1-zfs  volmode               default                   default
pbs-1-zfs  filesystem_limit      none                      default
pbs-1-zfs  snapshot_limit        none                      default
pbs-1-zfs  filesystem_count      none                      default
pbs-1-zfs  snapshot_count        none                      default
pbs-1-zfs  snapdev               hidden                    default
pbs-1-zfs  acltype               off                       default
pbs-1-zfs  context               none                      default
pbs-1-zfs  fscontext             none                      default
pbs-1-zfs  defcontext            none                      default
pbs-1-zfs  rootcontext           none                      default
pbs-1-zfs  relatime              on                        local
pbs-1-zfs  redundant_metadata    all                       default
pbs-1-zfs  overlay               on                        default
pbs-1-zfs  encryption            off                       default
pbs-1-zfs  keylocation           none                      default
pbs-1-zfs  keyformat             none                      default
pbs-1-zfs  pbkdf2iters           0                         default
pbs-1-zfs  special_small_blocks  0                         default
pbs-1-zfs  prefetch              all                       default
 
Thanks for the traces and the outputs.
I tested the capacity at 0 with a second PBS (which doesn't have a lot of backups). There the GC run took ~5 minutes before the update; after the update it takes ~25 minutes
Could you generate an strace output for the complete GC run on that PBS datastore? Smaller (in size and runtime) is better in this case, as it might make it easier to spot the culprit.

Edit: please set the cache capacity back to the default value before generating the trace, to reduce the output caused by atime updates.
 
Last edited: