Connect external Ceph to an existing PVE cluster

Coudrox

Jun 20, 2019
Hi,

I have a PVE cluster composed of 6 nodes (which we'll call PVE_CLUSTER1).
This PVE cluster uses a Ceph cluster (which we'll call CEPH_CLUSTER1) whose monitors are on the 6 nodes.
On CEPH_CLUSTER1, the following networks are used:

cluster network = 192.168.6.0/24
public network = 192.168.5.0/24

On Ceph, the 2 "pools" RBD are functional and I have no problem using these pools on PVE_CLUSTER1

I created a new PVE cluster (which we'll call PVE_CLUSTER2) on 2 new nodes.
I created a new Ceph cluster (which we'll call CEPH_CLUSTER2) with the following network settings:

cluster network = 192.168.7.0/24
public network = 192.168.5.0/24

On Ceph, the pool is functional and I have no problem using this pool on PVE_CLUSTER2

My goal is to be able to use CEPH_CLUSTER2 from PVE_CLUSTER1, so I did the following:

From one of the 2 new nodes of PVE_CLUSTER2, I copied '/etc/ceph/ceph.client.admin.keyring' to '/etc/pve/priv/ceph/[ID_STORAGE].keyring' on one of the 6 nodes of PVE_CLUSTER1.
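
For reference, a minimal sketch of that copy step (the target node name is a placeholder; the file name must match the storage ID used in storage.cfg, here ceph-rbd-filer-hdd):

# run on a PVE_CLUSTER2 node; pve-cluster1-node1 is a placeholder hostname
scp /etc/ceph/ceph.client.admin.keyring root@pve-cluster1-node1:/etc/pve/priv/ceph/ceph-rbd-filer-hdd.keyring

Since /etc/pve is the clustered pmxcfs, copying the keyring to one node of PVE_CLUSTER1 makes it available on all 6 nodes.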

I added this storage to PVE_CLUSTER1 in /etc/pve/storage.cfg:

rbd: ceph-rbd-filer-hdd
        content images
        krbd 0
        monhost 192.168.5.100;192.168.5.101
        pool rbd-filer-hdd
        username admin

But I see no information for this RBD storage in the Proxmox web UI of PVE_CLUSTER1.
Ping works between CEPH_CLUSTER1 and CEPH_CLUSTER2.

I can't find any error in the logs at '/var/log/ceph/ceph-mon*.log'.
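
As a sanity check, something like this could be run from one of the PVE_CLUSTER1 nodes to test the monitors and the keyring outside of PVE (values taken from the config above):

# list the pool directly with the rbd client
rbd -m 192.168.5.100,192.168.5.101 --id admin --keyring /etc/pve/priv/ceph/ceph-rbd-filer-hdd.keyring ls rbd-filer-hdd

# show the state of all configured storages as PVE sees them
pvesm status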

Do you have any idea what my problem is?

Thanks for your help.
 
rbd: ceph-rbd-filer-hdd
        content images
        krbd 0
        monhost 192.168.5.100;192.168.5.101
        pool rbd-filer-hdd
        username admin
Are you using the same MONs for both clusters?

But I see no information for this RBD storage in the Proxmox web UI of PVE_CLUSTER1.
What exactly do you mean? Is there no storage entry in the PVE GUI, or is it greyed out?

Ping works between CEPH_CLUSTER1 and CEPH_CLUSTER2.
Are you sure all needed ports are open between them? Did you recheck all of your firewall rules?
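
If unsure, a quick reachability check from a PVE_CLUSTER1 node against the CEPH_CLUSTER2 monitors (assuming the default Ceph ports) can help:

# Ceph MONs listen on 3300 (msgr2) and 6789 (msgr1) by default
nc -zv 192.168.5.100 3300
nc -zv 192.168.5.100 6789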
 
But I see no information for this RBD storage in the Proxmox web UI of PVE_CLUSTER1.
Ping works between CEPH_CLUSTER1 and CEPH_CLUSTER2.
You have to create the storage entry manually. When Ceph runs on the local PVE nodes, the storage is created along with the Ceph pool as a convenience option; for an external cluster it is not.
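
For an external cluster, the storage can be defined either by editing /etc/pve/storage.cfg as above, or roughly like this via the CLI on PVE_CLUSTER1 (a sketch; the keyring must already exist as /etc/pve/priv/ceph/ceph-rbd-filer-hdd.keyring):

pvesm add rbd ceph-rbd-filer-hdd --pool rbd-filer-hdd --monhost "192.168.5.100;192.168.5.101" --content images --username admin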
 
