Access Proxmox managed Ceph Pool from standalone node

stevensedory
Oct 26, 2019
Hello. To make migration easier, I'd like to connect a standalone node to the Proxmox-managed Ceph storage on a three-node cluster by adding that storage as RBD on the standalone node. The issue is that the current Ceph public_network is isolated, since it runs over directly attached 40G NICs. In theory, though, I should be able to add the management network as an additional subnet.

My directly attached setup follows this article exactly, which uses IPv6 and OSPF to create redundant links in the three-node 40G Ceph loop.

So what I did was add the IPv4 subnet and the appropriate IPs, but no dice.

Here's the ceph.conf before setting up external access:

[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = fc00::/64
fsid = 39103844-feac-46c1-97a8-2e96830147f2
mon_allow_pool_delete = true
mon_host = fc00::1 fc00::2 fc00::3
ms_bind_ipv4 = false
ms_bind_ipv6 = true
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = fc00::/64

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.kim-hci01-n01]
public_addr = fc00::1

[mon.kim-hci01-n02]
public_addr = fc00::2

[mon.kim-hci01-n03]
public_addr = fc00::3


Here's the after:

[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = fc00::/64
fsid = 39103844-feac-46c1-97a8-2e96830147f2
mon_allow_pool_delete = true
mon_host = fc00::1 fc00::2 fc00::3 10.110.33.31 10.110.33.32 10.110.33.33  # tried with spaces and commas
ms_bind_ipv4 = true
ms_bind_ipv6 = true
ms_bind_before_connect = true  # tried with and without this
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = fc00::/64,10.110.33.0/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.kim-hci01-n01]
public_addr = fc00::1,10.110.33.31

[mon.kim-hci01-n02]
public_addr = fc00::2,10.110.33.32

[mon.kim-hci01-n03]
public_addr = fc00::3,10.110.33.33

This should work right? I'm trying to access the RBD from a standalone node at 10.110.33.25. Any help/advice is much appreciated.
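For reference, here's roughly how I'd expect to attach the pool on the standalone node once the monitors are reachable from it, using the standard Proxmox external-RBD storage setup. The storage ID `cluster-rbd`, the pool name `vm-pool`, and the interface details are placeholders for my setup:

```shell
# On the standalone Proxmox node: copy the client keyring from the cluster
# to the path Proxmox expects for an external RBD storage named "cluster-rbd".
mkdir -p /etc/pve/priv/ceph
scp root@10.110.33.31:/etc/pve/priv/ceph.client.admin.keyring \
    /etc/pve/priv/ceph/cluster-rbd.keyring

# Register the RBD storage, pointing at the cluster's monitors.
# Pool name "vm-pool" is a placeholder for the actual pool.
pvesm add rbd cluster-rbd \
    --monhost "10.110.33.31,10.110.33.32,10.110.33.33" \
    --pool vm-pool \
    --username admin \
    --content images

# Verify the storage is active.
pvesm status
```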
 
Ceph is either IPv6 or IPv4, but not dual stack AFAIK.

Does your standalone node have a layer 2 connection in fc00::/64? If not, you need to route the traffic, which is fine for a Ceph client.
Oh okay. I will look into how to go about routing it.
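In case it helps anyone else, here's a minimal sketch of the routed approach, assuming one cluster node acts as the IPv6 router between the management segment and the fc00::/64 Ceph network. The ULA prefix `fd00:10:110:33::/64` and the interface name `eno1` are assumptions for illustration:

```shell
# On one cluster node (it has both the 40G fc00::/64 network and the
# management NIC): enable IPv6 forwarding and add a ULA address on the
# management interface so the standalone node has an IPv6 next hop.
sysctl -w net.ipv6.conf.all.forwarding=1
ip -6 addr add fd00:10:110:33::1/64 dev eno1

# On the standalone node (10.110.33.25): add a ULA address on the same
# segment and route the Ceph public network via the cluster node.
ip -6 addr add fd00:10:110:33::25/64 dev eno1
ip -6 route add fc00::/64 via fd00:10:110:33::1

# On the other two cluster nodes: add a return route so monitor/OSD
# replies reach the standalone node (fc00::1 is the router node's 40G
# address per the ceph.conf above).
ip -6 route add fd00:10:110:33::/64 via fc00::1
```

Note the return routes matter: without them the monitors at fc00::2 and fc00::3 have no path back to the standalone node, and connections will hang.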
 
