Hey @spirit, my comment referred to the "bringing his VM guests to their knees" issue, which is the problem we're seeing with backups to NFS shares. CPU in the VMs goes through the roof and they drop off the network while they're being backed up.
The built-in backup runs inside QEMU. Each new block needs to be written twice: once to the backup storage and once to the VM storage. This means the slowest of the two determines the effective storage speed of the VM, and in turn the CPU load rises because of the outstanding IO.
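To put rough numbers on the double-write effect, here's a small bash sketch. The 500 MB/s and 80 MB/s figures are made-up placeholders, not measurements; the point is just that the guest's effective write speed during a backup is capped by the slower of the two targets:

```shell
#!/usr/bin/env bash
# Hypothetical throughput figures -- substitute your own measurements.
vm_storage_mb_s=500      # local VM disk (e.g. SSD-backed)
backup_storage_mb_s=80   # NFS backup target

# Every new block is written to both targets, so the guest's effective
# write speed during the backup is the minimum of the two.
effective=$(( vm_storage_mb_s < backup_storage_mb_s \
              ? vm_storage_mb_s : backup_storage_mb_s ))
echo "effective guest write speed during backup: ${effective} MB/s"
```

With these placeholder numbers the guest is held to 80 MB/s for as long as the backup runs, which is where the IO backlog (and the CPU spike) comes from.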
@Alwin There's a problem writing backups to NFS targets that causes guests to become unresponsive. The problem does not occur when writing to the same remote server over CIFS rather than NFS. It is being tracked in bugzilla at
I'm also looking at writing an rbd solution so backups can't impact the hypervisors. Our current cloud platform uses a dedicated node within the storage cluster for backups, which gives good separation between backups and the production VMs. We'll look at implementing something similar and will naturally contribute it back if it works well.
With all the trouble we've been seeing backing up to NFS and CIFS volumes during our Proxmox evaluation, I started looking at rbd exports today. Does your code only work when writing to another Ceph volume? I'd like to export to local storage.
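For what it's worth, the stock `rbd export` command can already write an image to a local file rather than to another Ceph volume. A sketch of what that looks like (pool, image, snapshot, and path names below are placeholders, and the commands need a reachable cluster with a suitable keyring):

```shell
# Export a (hypothetical) VM disk image from the 'rbd' pool to local storage.
rbd export rbd/vm-100-disk-0 /mnt/local-backup/vm-100-disk-0.raw

# Incremental follow-ups can snapshot the image and ship only the delta:
rbd snap create rbd/vm-100-disk-0@backup1
rbd export-diff rbd/vm-100-disk-0@backup1 /mnt/local-backup/vm-100-disk-0.diff1
```

Whether that fits depends on what the code in question does, but exporting to a local path isn't blocked by rbd itself.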