Access Proxmox managed Ceph Pool from standalone node

stevensedory

Well-Known Member
Oct 26, 2019
Hello. To help make migration much easier, I'd like to connect a standalone node I have to the Proxmox-managed Ceph storage on a three-node cluster by adding that storage as RBD on the standalone. The issue is that the current Ceph public_network is isolated, because it runs over directly attached 40G NICs. In theory, though, I should be able to add the management network as an additional public subnet.

My directly attached setup was done by following this article exactly, which uses IPv6 and OSPF to create redundant links in the three-node Ceph 40G loop.

So what I did was add the IPv4 subnet, and the appropriate IPs, but no dice.

Here's the ceph.conf from before setting up external access:

[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = fc00::/64
fsid = 39103844-feac-46c1-97a8-2e96830147f2
mon_allow_pool_delete = true
mon_host = fc00::1 fc00::2 fc00::3
ms_bind_ipv4 = false
ms_bind_ipv6 = true
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = fc00::/64

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.kim-hci01-n01]
public_addr = fc00::1

[mon.kim-hci01-n02]
public_addr = fc00::2

[mon.kim-hci01-n03]
public_addr = fc00::3


Here's the after:

[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = fc00::/64
fsid = 39103844-feac-46c1-97a8-2e96830147f2
mon_allow_pool_delete = true
mon_host = fc00::1 fc00::2 fc00::3 10.110.33.31 10.110.33.32 10.110.33.33 (tried with spaces and commas)
ms_bind_ipv4 = true
ms_bind_ipv6 = true
ms_bind_before_connect = true (tried with and without this)
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = fc00::/64,10.110.33.0/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.kim-hci01-n01]
public_addr = fc00::1,10.110.33.31

[mon.kim-hci01-n02]
public_addr = fc00::2,10.110.33.32

[mon.kim-hci01-n03]
public_addr = fc00::3,10.110.33.33

This should work, right? I'm trying to access the RBD from a standalone node at 10.110.33.25. Any help/advice is much appreciated.
 
Ceph is either IPv6 or IPv4, but not dual stack AFAIK.

Does your standalone node have a layer 2 connection in fc00::/64? If not, you need to route the traffic, which is fine for a Ceph client.
Oh okay. I will look into how to go about routing it.
 
From what I can see, the monitors and OSDs only bind to one address, no matter what you put in there.
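(For anyone checking the same thing: "ceph mon dump" on one of the cluster nodes shows the address each monitor is actually advertising.)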
 
So for the record, everyone: as I couldn't get the monitors or OSDs to bind to more than one address (IPv6 or IPv4), we simply created some static routes on the standalone host, which enabled it to reach the segregated subnet that Ceph is on.

Of course, we first had to have IPv6 addresses on both ends. So first, the standalone host had an IPv6 address added to its management interface. Then, on the Ceph cluster nodes, each node's management interface had an IPv6 address added.
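Roughly, that boils down to something like this (the fd00::25/64 address for the standalone host and the interface name vmbr0 are placeholders for whatever your management interface and addressing actually are; fd00::1 through ::3 match the gateways used in the routes below):

# on the standalone host
ip -6 addr add fd00::25/64 dev vmbr0
# on each cluster node (node 1 shown; use fd00::2 and fd00::3 on the others)
ip -6 addr add fd00::1/64 dev vmbr0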

Lastly, from the standalone host, this was run to create the static routes:

ip -6 route add fc00::1/128 via fd00::1
ip -6 route add fc00::2/128 via fd00::2
ip -6 route add fc00::3/128 via fd00::3
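
Note that routes added with "ip -6 route add" don't survive a reboot. One way to make them persistent (assuming ifupdown/ifupdown2 and that vmbr0 is the management interface carrying the fd00:: address, which is an assumption on my part) is to add post-up lines under that interface in /etc/network/interfaces:

post-up ip -6 route add fc00::1/128 via fd00::1
post-up ip -6 route add fc00::2/128 via fd00::2
post-up ip -6 route add fc00::3/128 via fd00::3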

Once run, we could add the cluster's Ceph storage to the standalone host as RBD via the UI, which made migrating VMs over super easy.
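For reference, the resulting entry in /etc/pve/storage.cfg on the standalone host ends up looking roughly like this (the storage ID "cluster-rbd" and pool name "vm-pool" are placeholders for whatever you enter in the UI):

rbd: cluster-rbd
        content images
        krbd 0
        monhost fc00::1 fc00::2 fc00::3
        pool vm-pool
        username admin

The cluster's client.admin keyring also needs to end up on the standalone host as /etc/pve/priv/ceph/cluster-rbd.keyring (the file name has to match the storage ID).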

What we did there was simply:

1) "Move" the disk of the VM to the RBD storage, which can be done while online. Wait till complete. Move all disks if there are multiple.
2) Shutdown the VM
3) Grab the config: "nano /etc/pve/local/qemu-server/XXX.conf"
4) On one of the new cluster's nodes, create the file and paste in the config: "nano /etc/pve/local/qemu-server/XXX.conf" (an scp one-liner for this is sketched after the list)
5) Make the necessary changes to the config (e.g., update network bridge/interface names if needed, remove any lines that refer to removed disks as you don't need those, etc.)
6) Edit the config from the UI on the originating host, to ensure the VM doesn't start on boot, disconnect the NIC, and make a note in the name or Notes of the VM that it's been migrated
7) Turn on the VM on the new cluster (it'll just pop up in the UI after you save the XXX.conf file)
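
For steps 3 and 4, instead of copy/pasting through nano, the config can also just be copied over SSH, something like this (replace XXX with the VM ID; "newnode" is a placeholder for one of the new cluster's nodes):

scp /etc/pve/local/qemu-server/XXX.conf root@newnode:/etc/pve/local/qemu-server/XXX.conf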

Boom.
 
