Adding a Ceph Storage Pool RBD

Discussion in 'Proxmox VE: Installation and configuration' started by Calanon, Feb 11, 2019.

  1. Calanon

    Calanon New Member

    Feb 4, 2019
    Hi,

    I have two ceph clusters and two proxmox clusters.
    The first cluster is as follows:


    Ceph cluster, version hammer, running separately from the Proxmox cluster. The storage.cfg contains the config for the pools that should be added to the Proxmox cluster as storage.


    Ceph cluster, version luminous, also running separately from the Proxmox cluster. Set up the same way as the first cluster.

    In case you didn't realise already, we are trying to migrate the data stored in Ceph from Option "One" above to Option "Two".

    I had the idea of mounting the Ceph pool from the Option "One" cluster on the Option "Two" cluster, so that we can use the Proxmox move-disk feature and end up with the VM data disks on the new Proxmox cluster and Ceph storage.

    So far I am having problems with this. I contacted Ceph experts and they said it's a Proxmox thing.

    I added the pool "pool-a" from the other system to the Option "Two" system in /etc/pve/storage.cfg. I also added the ceph.client.admin.keyring. When I check whether it shows up as available storage, it reports a communication error.
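    For reference, an entry for an external RBD cluster in /etc/pve/storage.cfg typically looks something like the sketch below. The storage ID, monitor addresses, and username here are placeholders, not values from this thread; adjust them to your setup. Note that the keyring must be copied to /etc/pve/priv/ceph/ and named after the storage ID, not after the pool:

    ```
    rbd: external-pool-a
            pool pool-a
            monhost 192.0.2.10 192.0.2.11 192.0.2.12
            content images
            username admin
            krbd 0
    ```

    With this example ID, the keyring would need to live at /etc/pve/priv/ceph/external-pool-a.keyring. A mismatch between the storage ID and the keyring filename is a common cause of the kind of communication error described above.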

    Does anyone have any experience or wisdom with this problem?
  2. Alwin

    Alwin Proxmox Staff Member

    Aug 1, 2017
    For one, I think you may have a naming error; please look through our documentation.
    Second, your PVE nodes need to have the stock jewel Ceph packages, otherwise it will not be possible to connect to a Ceph hammer cluster and a luminous cluster (assuming client compatibility) at the same time. But this is no Proxmox thing. ;)
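    The compatibility rule above can be sketched as a toy shell check (this is an illustration, not an official Proxmox tool; the version-to-release mapping is: hammer = 0.94.x, jewel = 10.2.x, luminous = 12.2.x). In practice you would feed it the version string reported by `ceph --version` on the PVE node:

    ```shell
    #!/bin/sh
    # Toy sketch: only a jewel (10.2.x) client can talk to both a hammer
    # cluster and a luminous cluster at the same time; a hammer-only or
    # luminous-only client cannot bridge the two.
    can_bridge() {
        case "$1" in
            10.2.*) echo yes ;;  # stock jewel client package
            *)      echo no  ;;  # hammer (0.94.x), luminous (12.2.x), etc.
        esac
    }

    can_bridge 10.2.11   # jewel client
    can_bridge 12.2.13   # luminous-only client
    ```

    This is why the move-disk approach only works if the PVE nodes keep the stock jewel client packages while both clusters are attached.
    
    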