Search results

  1. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    btw, you are running snapshot-based and not journal-based mirroring on the pool. { "name": "<image>", "global_id": "0af1a493-f8dd-483c-8b38-779f1f10be15", "state": "up+stopped", "description": "local image is primary", "daemon_service": { "service_id": "1271169854"...
  2. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    This should make it possible to check it with one command. Check whether active_set != minimum_set: rbd --format json --verbose mirror pool status (see the sketch after the results list).
  3. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    @hainh I might actually be wrong here. I tried it out now and rbd_discard_granularity_bytes cannot be set to < 4096. Setting rbd_skip_partial_discard = false sets rbd_discard_granularity_bytes = 0 internally. It's easy to see this with rbd journal inspect --image $IMAGE --verbose (see the sketch after the results list): Entry: tag_id=1...
  4. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    @hainh The file is /etc/pve/priv/ceph/<storage_name>.conf and the setting is rbd discard granularity bytes = 0. A good place to add it is under [global] (see the sketch after the results list).
  5. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    @hainh thanks. Does it work with the setting? Note that it's the client config, not the server config, so you need to set this in the conf on the Proxmox host. If you could confirm the fix that would be awesome! I should note that there may be hidden dragons here, nothing confirmed, but at least this...
  6. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    Note that this in effect disables discards, since setting rbd discard granularity bytes = 0 has the same outcome. I made a fix that stops the misaligned discards from entering the journal. Hopefully that will make sense, or will be modified to fix the problem. This has been an issue since...
  7. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    @hainh Yeah, add it to the ceph conf. In our case, AioDiscard events that were generated by fstrim (and, I guess, the ext4 journal) made JournalMetadata not update the rbd image properly, so journal_data objects were added and never removed (see the sketch after the results list). There's some more info here: https://tracker.ceph.com/issues/57396...
  8. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    I'm trying to get some answers about the journal. https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/SEOZ7Y2NTQQAKMA3PAN7WD2N7AUEIMH3/ Did a lot of debugging today; the local journal replay hits an error at some point and dies. Probably related to some I/O that the rbd-mirror handles...
  9. After enabling CEPH pool one-way mirroring pool usage is growing up constantly and pool could overfull shortly

    Any chance people with issues could paste the output of rbd journal status and rbd journal info? They should show more info about the journal right now (see the sketch after the results list). I just had an issue where a VM would not start (the start gets a timeout), but the KVM process remains. I checked the above commands and it turns...
  10. Multiple Ceph-clusters

    Just a reminder that we supply a patch for this now; would it be possible to include it in the master branch? It still works with the latest release.
  11. Multiple Ceph-clusters

    Here's the diff. This is just for adding multiple-cluster support to pools; deployment is done via ceph-deploy instead.
  12. Multiple Ceph-clusters

    Ok, thanks. Have you done any thinking about how it should be implemented if it were to be? Cheers, Josef
  13. Multiple Ceph-clusters

    Hi, I saw that the cluster name for Ceph is hard-wired in the Cephtools.pm file. Is it possible to open it up so that the cluster name can be specified in storage.cfg? Thanks, Josef
  14. Use storage network for migration

    Hi, I was wondering if it's possible to use the storage network instead of the cluster-network for migration, since you often have a lot more bandwidth there. Cheers, Josef
  15. Failed snapshot removal

    Hi, Using Ceph and Proxmox VE, removing a VM's snapshot with a memory snapshot enabled fails miserably. A normal snapshot without a memory snapshot works splendidly. The error at the end sounds scary; does it try to remove the base image? Cheers, Josef dpkg -l pve*...
  16. Cloning is not sparse in CEPH

    Hi, It seems that doing a full clone is quite intense and you lose sparseness on images. Any reason for not using rbd flatten instead of qemu-img convert? I.e., first clone the image and then flatten it. Cheers, Josef
  17. Proxmox VE Ceph Server released (beta)

    It seems that doing a full clone is quite intense and you lose sparseness on images. Any reason for not using rbd flatten instead of qemu-img convert? I.e., first clone the image and then flatten it (see the sketch after the results list). Cheers, Josef
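For the one-command check quoted in result 2, a minimal sketch follows. It assumes jq is installed on the client and that active_set/minimum_set appear somewhere in the verbose JSON (the truncated snippet does not show where, so the filter searches the whole document); the pool name is a placeholder.

    # Placeholder pool name.
    POOL=rbd

    # Verbose mirror status for the whole pool, as JSON (command from the thread).
    rbd --format json --verbose mirror pool status "$POOL" > /tmp/mirror-status.json

    # Print every object that carries an active_set next to its minimum_set.
    # If active_set keeps climbing while minimum_set stays put, the journal is
    # not being trimmed.
    jq '[.. | objects | select(has("active_set")) | {active_set, minimum_set}]' /tmp/mirror-status.json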
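Result 3 uses rbd journal inspect to look at individual journal entries. A small sketch of that, with placeholder pool and image names; the grep assumes discard events are identifiable by name in the verbose dump, which the truncated snippet does not confirm.

    # Placeholders for the pool and image in question.
    POOL=rbd
    IMAGE=vm-100-disk-0

    # Decode the image journal entry by entry (command from the thread).
    rbd journal inspect --pool "$POOL" --image "$IMAGE" --verbose > /tmp/journal-entries.txt

    # Rough count of entries that mention discards; per result 7 the problematic
    # events were AioDiscards generated by fstrim.
    grep -ic discard /tmp/journal-entries.txt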
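Results 4 and 5 give the client-side workaround. A sketch of how the file could look, keeping the <storage_name> placeholder from the post; this goes on the Proxmox host, not on the Ceph servers.

    # /etc/pve/priv/ceph/<storage_name>.conf
    [global]
        # Per results 5 and 6 this is a client-side setting and in effect
        # disables discards for the mapped images.
        rbd discard granularity bytes = 0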
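Result 7 describes journal_data objects piling up when the journal is never trimmed. One way to watch that directly is to count them in the pool; a sketch with a placeholder pool name, assuming the journal data lives in the image pool itself.

    POOL=rbd

    # RBD journal payload is stored in objects named journal_data.*; if the
    # journal is never trimmed this count keeps growing.
    rados -p "$POOL" ls | grep -c '^journal_data'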
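Result 9 asks for the output of rbd journal status and rbd journal info. A sketch of collecting both for one image, with placeholder names.

    POOL=rbd
    IMAGE=vm-100-disk-0

    # Static journal configuration for the image.
    rbd journal info --pool "$POOL" --image "$IMAGE"

    # Current journal state; per result 2, check whether active_set and
    # minimum_set diverge and whether the mirror client keeps up.
    rbd journal status --pool "$POOL" --image "$IMAGE"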
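Results 16 and 17 ask why a full clone is not done as clone-then-flatten. A sketch of that sequence with placeholder names; snapshot protection is needed for older (v1) clones, and the claim that this preserves sparseness is the thread's, not verified here.

    POOL=rbd
    SRC=vm-100-disk-0
    DST=vm-101-disk-0

    # Snapshot the source and protect it so it can be cloned.
    rbd snap create "$POOL/$SRC@clone-base"
    rbd snap protect "$POOL/$SRC@clone-base"

    # Copy-on-write clone, then flatten so the new image no longer depends on
    # the parent; per the thread this keeps the image sparse, unlike
    # qemu-img convert.
    rbd clone "$POOL/$SRC@clone-base" "$POOL/$DST"
    rbd flatten "$POOL/$DST"

    # Clean up the temporary snapshot once the clone is independent.
    rbd snap unprotect "$POOL/$SRC@clone-base"
    rbd snap rm "$POOL/$SRC@clone-base"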
