Hi,
I am running PBS on one PVE/Ceph node where all OSDs are 3.4 TiB WD REDs. This backup pool has become rather full, and I wonder if this is the reason that GC runs for days. There is almost no CPU or storage I/O load on the system, but quite a number of snapshots from my PVE cluster:
CT: 1 Groups, 25 Snapshots
Host: 1 Groups, 0 Snapshots
VM: 141 Groups, 3476 Snapshots
Storage pool Usage: 94.54% (17.78 TiB of 18.81 TiB)
Deduplication Factor: 31.66
The GC job has been running for 18+ hours and is still only at:
Code:
2021-06-10T14:59:39+02:00: starting garbage collection on store proxmoxBackup
2021-06-10T14:59:39+02:00: Start GC phase1 (mark used chunks)
2021-06-10T15:09:39+02:00: marked 1% (54 of 5377 index files)
2021-06-10T15:15:29+02:00: marked 2% (108 of 5377 index files)
2021-06-11T03:03:19+02:00: marked 3% (162 of 5377 index files)
There's nothing else going on on this pool; it's dedicated solely to PBS. The backup volume is, as stated above, on a Ceph pool, which consists of 3 x 6 x 3.4 TiB WD REDs, so I expected no latency issues from that direction. I am currently a bit at a loss as to what I could do to get the performance up, and any ideas are greatly appreciated.
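One thing I figure I could try, to see whether per-chunk metadata latency is the bottleneck: GC phase 1 walks the index files and updates the access time of every referenced chunk, so the run time is dominated by small metadata operations rather than throughput. A rough sketch of a probe for that (the datastore path and sample size below are just examples, not my real setup):
Code:
#!/usr/bin/env python3
# Rough metadata-latency probe: time stat() + utime() on a sample of chunk
# files, which is roughly the per-chunk work GC phase 1 does when marking
# used chunks. Adjust DATASTORE to the real datastore mount point.
import os
import random
import time

DATASTORE = "/mnt/datastore/proxmoxBackup/.chunks"  # example path only
SAMPLE = 1000

# Collect chunk files from the hex-named subdirectories of .chunks
chunks = []
for sub in os.listdir(DATASTORE):
    subdir = os.path.join(DATASTORE, sub)
    if not os.path.isdir(subdir):
        continue
    for name in os.listdir(subdir):
        chunks.append(os.path.join(subdir, name))
    if len(chunks) >= SAMPLE * 10:
        break

random.shuffle(chunks)
sample = chunks[:SAMPLE]

start = time.monotonic()
for path in sample:
    os.stat(path)        # read metadata
    os.utime(path, None) # update timestamps; harmless, only marks the chunk as recently used
elapsed = time.monotonic() - start

print(f"{len(sample)} chunks: {elapsed:.2f}s total, "
      f"{elapsed / len(sample) * 1000:.2f} ms per chunk")
If a single stat/touch takes several milliseconds over Ceph, that would already add up to many hours across millions of chunks.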
Thanks,
budy