PBS to backup CephFS

I am getting back to these threads:

None of them has really been answered correctly from my point of view, and none of them has been answered by the Proxmox support team.

One of the threads suggests using pbs-client for the backups, which raises a couple of problems:
  1. the target CephFS volume to be backed up has to be mounted
  2. if pbs-client is deployed on a single node, what happens when that node is down? (i.e. no cluster-wide scope)
  3. pbs-client is definitely slow (my tests suggest a speed of 32.75 MiB/s)

This is illustrated below.
Code:
root@pve01:~# proxmox-backup-client backup CephFS.pxar:/mnt/pve/CephFSPool/ --repository backup@pbs@192.168.10.21:BckpCephFS
Starting backup: host/pve1/2021-06-08T16:20:14Z
Client name: pve1
Starting backup protocol: Tue Jun  8 18:20:14 2021
No previous manifest available.
Upload directory '/mnt/pve/CephFSPool/' to 'backup@pbs@192.168.10.21:8007:BckpCephFS' as CephFS.pxar.didx
CephFS.pxar: had to backup 822.32 MiB of 11.34 GiB (compressed 754.83 MiB) in 25.11s
CephFS.pxar: average backup speed: 32.75 MiB/s
CephFS.pxar: backup was done incrementally, reused 10.53 GiB (92.9%)
Uploaded backup catalog (352 B)
Duration: 25.13s
End Time: Tue Jun  8 18:20:39 2021
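For what it's worth, the repository and credentials can also be supplied via environment variables (`PBS_REPOSITORY` and `PBS_PASSWORD` are documented by proxmox-backup-client), which makes the call easier to wrap in a cron job or systemd timer per node. A minimal sketch, with the repository taken from the log above; the wrapper itself is my own assumption, not something from this thread:

```shell
#!/bin/sh
# Sketch of a wrapper for the backup call above. PBS_REPOSITORY and
# PBS_PASSWORD are the documented environment variables of
# proxmox-backup-client; the repository value mirrors the log in this post.
export PBS_REPOSITORY='backup@pbs@192.168.10.21:BckpCephFS'
# export PBS_PASSWORD='...'   # in practice, read from a root-only file
proxmox-backup-client backup CephFS.pxar:/mnt/pve/CephFSPool/
```

This needs a reachable PBS instance, so treat it as a configuration sketch rather than something runnable in isolation.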

As a comparison, the backup pool has a local write speed of ± 250 MB/s:
Code:
root@pbs1:/mnt/datastore/backup/BckpCephFS# dd if=/dev/zero of=MonFichier_local bs=1k count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 4.03585 s, 254 MB/s
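A side note on that benchmark: `dd` from /dev/zero with bs=1k largely measures the page cache rather than the disk. Using larger blocks and forcing the data to storage (`conv=fsync`, or `oflag=direct` where the filesystem supports it) gives a more honest baseline; a sketch:

```shell
# Write 1 GiB in 1 MiB blocks and include the final fsync in the timing,
# so the page cache cannot inflate the reported throughput.
dd if=/dev/zero of=MonFichier_local bs=1M count=1024 conv=fsync
rm -f MonFichier_local
```

The real figure will usually come out somewhat lower than the cached one, which makes the comparison with the PBS numbers fairer.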

Is any work planned by the Proxmox team to allow PBS to back up CephFS more efficiently?
Do you have any advice on how to speed things up in PBS for such a scenario?

As a comparison, here is the performance of the same storage when used to back up VMs:
Code:
INFO: starting new backup job: vzdump 102 --storage PmxBckp --mode snapshot --node pve1 --remove 0
INFO: Starting Backup of VM 102 (qemu)
INFO: Backup started at 2021-06-08 18:53:34
INFO: status = running
INFO: VM Name: test-VM
INFO: include disk 'scsi0' 'NVMePool:vm-102-disk-0' 20G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/102/2021-06-08T16:53:34Z'
INFO: skipping guest-agent 'fs-freeze', agent configured but not running?
INFO: started backup task 'ab2c6310-77ca-4f57-a4e8-c4e6b8f8b2e9'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: created new
INFO:   2% (612.0 MiB of 20.0 GiB) in 3s, read: 204.0 MiB/s, write: 166.7 MiB/s
INFO:  10% (2.1 GiB of 20.0 GiB) in 6s, read: 526.7 MiB/s, write: 165.3 MiB/s
INFO:  53% (10.7 GiB of 20.0 GiB) in 9s, read: 2.8 GiB/s, write: 164.0 MiB/s
INFO: 100% (20.0 GiB of 20.0 GiB) in 11s, read: 4.7 GiB/s, write: 82.0 MiB/s
INFO: backup is sparse: 18.39 GiB (91%) total zero data
INFO: backup was done incrementally, reused 18.39 GiB (91%)
INFO: transferred 20.00 GiB in 11 seconds (1.8 GiB/s)
INFO: Finished Backup of VM 102 (00:00:15)
INFO: Backup finished at 2021-06-08 18:53:49
INFO: Backup job finished successfully
TASK OK

So the ∆ between file backup (32.75 MiB/s) --> local write (254 MB/s) --> volume backup (1.8 GiB/s) is HUGE.
Any advice on how to obtain reasonable figures with file-level backup would be welcome.


Thx !
 

ph0x

I run pbs-client in an HA VM with access to the CephFS, which is acceptably fast (350 GB in ~25 min).
Don't forget that for a vzdump a dirty bitmap of the RBD image is created, whereas for CephFS every file has to be checked. The reported throughput is therefore a bit misleading, since it is simply transferred data divided by total time, and most of that time is spent determining what actually has to be transferred. Thus, I doubt there is much room for improvement when it comes to backing up a CephFS.
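The metadata-scan cost described above is easy to observe in isolation: just walking the tree, which the client has to do before deciding what to upload, already takes time proportional to the file count. A quick sketch (point `DIR` at the CephFS mount, e.g. /mnt/pve/CephFSPool/ from the first post):

```shell
# Time a pure metadata walk and count the files. This approximates the
# per-file work a file-level backup must do even when nothing has changed.
DIR="${DIR:-.}"
time find "$DIR" -type f | wc -l
```

On a tree with millions of small files this walk alone can dominate the backup duration, independent of network or disk speed.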
 
I was thinking of mounting the CephFS directly on the PBS host using the kernel CephFS client.

This should noticeably speed up the backup: my tests showed about 350 MB/s, which would be far better than the speed we currently get.
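For reference, mounting CephFS with the kernel client on the PBS host looks roughly like this. The monitor address, CephX user name and keyring path below are placeholders for the example, not values from this thread:

```shell
# mount.ceph comes from the ceph-common package.
apt install ceph-common
mkdir -p /mnt/cephfs
# Kernel-client mount: monitor address, CephX user and secret file are
# placeholders; substitute your cluster's values.
mount -t ceph 192.168.10.11:6789:/ /mnt/cephfs \
    -o name=backup,secretfile=/etc/ceph/backup.secret
# proxmox-backup-client can then read /mnt/cephfs locally on the PBS host.
```

This requires a reachable Ceph cluster, so it is a configuration sketch only. Note it avoids one network hop, but the per-file scan overhead discussed earlier remains.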

What do you think of the idea?
 

ph0x

Sounds nice, I should test this as well. I don't think the network is the bottleneck here, but it could still be a performance boost.
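Whether the network is the bottleneck can be settled directly with iperf3 between a PVE node and the PBS host (the address is taken from the first post; iperf3 must be installed on both ends):

```shell
# On the PBS host: start an iperf3 server.
iperf3 -s
# On a PVE node: run a 10-second throughput test against it.
iperf3 -c 192.168.10.21 -t 10
```

If this reports well above 32.75 MiB/s, the bottleneck is in the file-level scan and upload path, not the link.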
 
