Hey guys
We are using ceph-fuse to mount a CephFS volume for Proxmox backups at `/srv/proxmox/backup/`.
Recently, I noticed that the backup volume kept running out of free space, causing the backup jobs to fail (we had a Ceph quota of 2 TB in place on the pool as a safety measure). Here is what struck me:
Code:
root@proxmox-a:~# du -sh /srv/proxmox/backup/
969G /srv/proxmox/backup/
root@proxmox-a:~# df -h /srv/proxmox/backup/
Filesystem      Size  Used Avail Use% Mounted on
ceph-fuse       4.5T  2.2T  2.3T  49% /srv/proxmox/backup
root@proxmox-a:~# ceph df detail
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED     OBJECTS
    19442G     9172G     10269G       52.82         871k
POOLS:
    NAME           ID     QUOTA OBJECTS     QUOTA BYTES     USED      %USED     MAX AVAIL     OBJECTS     DIRTY     READ     WRITE     RAW USED
    backup-data    1      N/A               3000G           2176G     48.14     2344G         558031      544k      324M     129M      6528G
[...]
As you can see, I increased the Ceph quota from 2 TB to 3 TB, but that's not a sustainable solution. What strikes me is the mismatch: df's 2.2T matches the pool's USED value (2176G, stored three times as 6528G RAW USED, so the pool looks 3x replicated), yet `du` only accounts for 969G, leaving roughly 1.2T unaccounted for.
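For reference, the quota bump itself was just a `ceph osd pool set-quota` call; the exact byte value below is my reconstruction of the 3000G shown in `ceph df detail`:
Code:
# raise the byte quota on the backup pool to 3000 GiB (3000 * 2^30 bytes)
ceph osd pool set-quota backup-data max_bytes 3221225472000
# verify the new limit
ceph osd pool get-quota backup-data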
This constant "growth" has been going on for a few days now: every night, while the backup job runs, the disk usage reported by ceph-fuse increases, and the free space never seems to be reclaimed afterwards. The stats reported by `du`, on the other hand, seem about right.
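One thing I still want to rule out is that the space is tied up in files that were deleted but are still referenced, either by an open file handle on a client or by the MDS stray (delayed-purge) directory. Rough sketch of what I plan to run; `mds.<name>` is a placeholder for our actual MDS daemon name:
Code:
# deleted-but-still-open files on this client (link count < 1)
lsof +L1 /srv/proxmox/backup
# number of stray (deleted, not yet purged) inodes held by the MDS
# (run on the MDS host, replace <name> with the MDS daemon name)
ceph daemon mds.<name> perf dump mds_cache | grep -i stray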
Any ideas?
Here is how `/srv/proxmox/backup/` is mounted:
Code:
root@proxmox-a:~# mount | grep ceph-fuse
ceph-fuse on /srv/proxmox/backup type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
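For what it's worth, CephFS also exposes recursive directory statistics as virtual xattrs, so the filesystem's own view of the tree can be cross-checked against `du`:
Code:
# recursive byte count of the tree as tracked by CephFS itself
getfattr -n ceph.dir.rbytes /srv/proxmox/backup/
# recursive file and subdirectory counts
getfattr -n ceph.dir.rfiles /srv/proxmox/backup/
getfattr -n ceph.dir.rsubdirs /srv/proxmox/backup/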