Ceph backup

Discussion in 'Proxmox VE: Installation and configuration' started by PiotrD, Sep 3, 2014.

  1. PiotrD

    PiotrD New Member

    Joined:
    Apr 10, 2014
    Messages:
    29
    Likes Received:
    1
    Hi,
    I am experiencing slow backup speeds when I back up Ceph images in Proxmox. I do not see any problems with the network or with Ceph itself. I would like to know which Ceph functionality Proxmox uses for backups. Is it rbd export? I would like to debug Ceph, but I do not know how Proxmox performs backups from it. Right now backups run at around 40MB/s, and they should be much faster. I tested backups from Ceph to local storage.

    UPDATE:
    I did a small test:
    standard Proxmox backup took ~4 minutes
    using rbd export and lzop took ~1 minute

    However, I am not sure the two are equivalent, and I do not know whether export is the correct way to do a live backup.
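
    For reference, the export test was roughly the following (a sketch; the image and file names are from my test setup, not exact command lines):

    time rbd export rbd/vm-101-disk-1 - | lzop > vm-101-disk-1.raw.lzo

    rbd export with "-" as the destination writes the raw image to stdout, so lzop compresses it on the fly.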

    Kind regards,
    Piotr D
     
    #1 PiotrD, Sep 3, 2014
    Last edited: Sep 3, 2014
  2. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,484
    Likes Received:
    314
    We also observe that. Backups are done inside qemu, using the normal block driver to read the data (via a qemu backup job).
    Qemu backup jobs read data in 64K blocks. Maybe that is what limits performance?
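
    If someone wants to check whether the 64K request size is the limiting factor, a rough test is to compare read throughput at different block sizes against a mapped image (a sketch; the image name is an example, and rbd map goes through the kernel client rather than the librbd path qemu uses, so it is only an approximation):

    rbd map rbd/vm-101-disk-1
    dd if=/dev/rbd0 of=/dev/null bs=64k count=16384 iflag=direct   # read 1 GiB in 64K requests
    dd if=/dev/rbd0 of=/dev/null bs=4M count=256 iflag=direct      # read 1 GiB in 4M requests
    rbd unmap /dev/rbd0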
     
  3. PiotrD

    PiotrD New Member

    Joined:
    Apr 10, 2014
    Messages:
    29
    Likes Received:
    1
    I tried time qemu-img convert rbd:rbd/vm-101-disk-1 -O raw dump.raw

    real 0m43.892s
    user 0m21.199s
    sys 0m14.446s

    With lzop it should take around ~1.5 min, so it is still faster than via the GUI. Are you maybe using something different from this? I am probably missing something here :)
     
  4. e100

    e100 Active Member
    Proxmox Subscriber

    Joined:
    Nov 6, 2010
    Messages:
    1,235
    Likes Received:
    24
    Is this configurable? I suspect larger reads would perform much better on some of my servers.
     
  5. PiotrD

    PiotrD New Member

    Joined:
    Apr 10, 2014
    Messages:
    29
    Likes Received:
    1
    And that should be my second question. Is this configurable? :)
     
  6. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,484
    Likes Received:
    314
    No, that is not configurable, and it is non-trivial to change.
    Besides, 64K reads should be fast, so there may be a bug inside the ceph/rados library. Or maybe the check for allocated chunks is slow?
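
    One way to see how much of an image is actually allocated, and how long the allocation metadata query itself takes, is to time rbd diff (a sketch; the image name is an example):

    time rbd diff rbd/vm-101-disk-1 | awk '{ sum += $2 } END { print sum/1024/1024 " MB allocated" }'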
     
  7. Mitya

    Mitya Member

    Joined:
    Feb 19, 2013
    Messages:
    51
    Likes Received:
    0
    I get 20MB/s for allocated space and 40MB/s for unallocated (reported as sparse).
    Maybe there is an easy way to enable readahead at a lower level, while still reading 64k at qemu's backup level?
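
    Newer librbd versions are supposed to expose readahead settings; something like this in the [client] section of ceph.conf might help (a sketch only, and I have not verified which release introduces these options or their exact defaults):

    [client]
    rbd cache = true
    rbd readahead max bytes = 4194304
    rbd readahead trigger requests = 10
    rbd readahead disable after bytes = 0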
     
  8. mo_

    mo_ Member

    Joined:
    Oct 27, 2011
    Messages:
    399
    Likes Received:
    3
    Maybe it'd be worth collaborating with the Ceph guys on this. Especially since there's another issue involving backups to be solved (by the Ceph guys): cache pools. More to the point: if you have a cache pool in front of your Ceph storage and you take a backup, you essentially clear out the entire cache (which used to contain the most commonly used objects) by replacing its contents with the backup reads. So a dedicated read method for backups would be desirable, one that would a) read in bigger block sizes (4M preferably) and b) "bypass" the cache for the backup process. This last bit would have to make sure the data is consistent, roughly (see the command sketch after this list):

    - initiate backup
    - switch the cache pool to read-only
    - flush dirty data to cold storage (data that changed while in the cache)
    - take the backup directly from cold storage
    - switch the cache pool back to read-write
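
    A rough sketch of those steps with the existing cache tiering commands ("hot-pool" is an example name, and whether this is safe to do on a live cluster is another question):

    ceph osd tier cache-mode hot-pool readonly
    rados -p hot-pool cache-flush-evict-all
    # take the backup from the cold pool here
    ceph osd tier cache-mode hot-pool writeback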

    Backups in general are sadly more of an afterthought for Ceph, as backing up petabytes of data simply isn't practical, and in the end that's the scale Ceph has been designed around. Of course there are also much smaller setups (like the ones sufficient for 'normal' datacenter virtualization) where backups ARE feasible. This is an area where Ceph could still improve quite a bit.
     
    #8 mo_, Sep 5, 2014
    Last edited: Sep 5, 2014
  9. PiotrD

    PiotrD New Member

    Joined:
    Apr 10, 2014
    Messages:
    29
    Likes Received:
    1
    +1 to that. Or has someone already reported it to the Ceph team?
     
  10. Mitya

    Mitya Member

    Joined:
    Feb 19, 2013
    Messages:
    51
    Likes Received:
    0
    Maybe just use the rbd export/import commands to back up and restore rbd images? It should be much faster than qemu's backup.
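
    For a running VM one could snapshot first, so the export is at least crash-consistent, roughly like this (names are examples):

    rbd snap create rbd/vm-101-disk-1@backup
    rbd export rbd/vm-101-disk-1@backup - | lzop > vm-101-disk-1.raw.lzo
    rbd snap rm rbd/vm-101-disk-1@backup

    and restore with something like: lzop -dc vm-101-disk-1.raw.lzo | rbd import - rbd/vm-101-disk-1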
     
  11. liska_

    liska_ Member

    Joined:
    Nov 19, 2013
    Messages:
    115
    Likes Received:
    3
    Hi,
    is there anything new in this area? My backups run at around 30MB/s from Ceph storage but at more than 100MB/s from NFS-based storage to the same destination. rados bench shows reads around 300MB/s. It is quite inconvenient.
    Are there any settings, or anything else I can do, to improve performance? I have found nothing on this matter via Google either.
     