Adding non-Proxmox hosts to a Proxmox Ceph pool

Madhatter

Hi, I know this relates more to Ceph than to Proxmox directly, but I'll give it a try.

In my home lab I have one Proxmox node (7.latest), and I have configured Proxmox Ceph (Pacific) on it with 2 OSDs.

I have two more non-Proxmox hosts on which I would like to install Ceph (cephadm?) and add them to the Proxmox Ceph cluster.

Testing this on VMs for now to play with:
Installing cephadm on the Proxmox host broke Ceph there quite thoroughly (I saw a post here about symlinks that I overwrote).
Trying to manually add a node using just the ceph.conf is not really working either; I seem to be missing something very important. That's why I thought cephadm would be the "for dummies" route.

Has anyone done this before and has some tips on where to start? Are there any Proxmox scripts I could copy over to manage non-Proxmox Ceph nodes?

Or should I uninstall Proxmox Ceph and run only cephadm on all hosts?
https://forum.proxmox.com/threads/r...ust go to a nodes,ceph packages for each node.

If anyone wonders about the use case besides "because I can (or not)":

1) One host running Proxmox, with 2x 1 TB HDDs spare to run Ceph on, for a future shared pool between hypervisors (CephFS)
2) Another host running Debian Bullseye with OpenMediaVault and opennebula-node, with 2x 1 TB HDDs spare to run Ceph on
3) Another host running XCP-ng (CentOS based) with another 2x 1 TB HDDs spare for Ceph.

In my mind, maybe even together with a container on my NAS and a few TB from there as vdisks, I would end up with a mix of an additional backup target and a shared pool, rather than each hypervisor running isolated by itself.

Again, this is purely a lab environment and not intended for any sort of production use.

Any ideas appreciated.
Thanks, Andreas
 
Hey, first and foremost, I am neither a Ceph nor a Proxmox expert, but I have played around with both enough to feel comfortable giving some feedback.

First off, cephadm works through containers and is itself an orchestrator: it creates containers for all of its services (monitor, manager, etc.). Because of that, cephadm deployments keep their settings in a sort of "live database" (the centralized config store in the monitors), so /etc/ceph/ceph.conf on the host is in a sense irrelevant to the operation of the cluster; you can edit that ceph.conf file all day and it won't change a live setting on the cluster, which I have experienced first-hand. What I have noticed is that ceph.conf is still used by clients, so if you install ceph-common you need that file plus a keyring to access the containerized cluster. I'm fairly confident this is accurate, but if I turn out to be wrong I apologize in advance.
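
To make that concrete, here is a rough sketch (the IPs, the fsid placeholder and the use of client.admin are just examples) of how settings live in the central config database rather than in ceph.conf, and what an outside ceph-common client actually needs:

Code:
# Settings live in the cluster, not in ceph.conf:
ceph config dump                                  # show the central config database
ceph config set global osd_pool_default_size 2    # changes a live setting (editing ceph.conf would not)

# An external client only needs a minimal /etc/ceph/ceph.conf ...
# [global]
#     fsid = <your cluster fsid>
#     mon_host = 192.0.2.10,192.0.2.11
# ... plus a keyring, e.g. exported from the cluster:
ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring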

Proxmox, on the other hand, uses the standard Ceph packages and systemd services directly on the host, so it relies on configuration files and keyrings located on the system itself.
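
For comparison, on a stock pveceph setup (assuming defaults) you can see that everything is plain files and ordinary systemd units rather than containers:

Code:
# The cluster config is a regular file managed through the Proxmox cluster filesystem:
ls -l /etc/ceph/ceph.conf        # symlink to /etc/pve/ceph.conf
# The daemons are ordinary systemd services, not containers:
systemctl list-units 'ceph-mon@*' 'ceph-mgr@*' 'ceph-osd@*'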

Sorry if you knew all that, but it's just background for the point I'm going to make.

It is not recommended to install cephadm on a Proxmox host, because installing Docker (and Podman falls into the same category) on the physical host itself is considered unwise from a security standpoint, so unless you are OK with that (you're running a lab, so probably) it isn't really an option. Even if you decide to go ahead with it, it will not integrate with the Proxmox Ceph management UI, though you could use Ceph's built-in dashboard, which is quite nice. I am assuming that you are going for a "single pane of glass" management interface, so multiple dashboards might not be what you want.
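
If you do want Ceph's built-in dashboard on a Proxmox-managed cluster, enabling it looks roughly like this (a sketch: the package name is from the Debian/Proxmox Ceph repos, and the user and password are placeholders):

Code:
apt install ceph-mgr-dashboard
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
echo 'change-me' > /root/dash-pass.txt
ceph dashboard ac-user-create admin -i /root/dash-pass.txt administrator
# The dashboard then listens on https://<active-mgr-host>:8443 by default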

Since the two installations of Ceph are managed in different ways, they may simply not be interoperable. If you have resources to spare on the Proxmox host, create a VM for cephadm and pass the drives through to it.
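
Passing whole disks through to such a VM is just a matter of attaching the block devices to it, e.g. (the VM ID and disk IDs below are made up; use your own /dev/disk/by-id paths):

Code:
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_1
qm set 100 -scsi2 /dev/disk/by-id/ata-EXAMPLE_DISK_2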

Another possible option, which I haven't verified: enable Ceph on Proxmox, then add the Ceph repositories to the other two servers and install Ceph manually. Add the other two servers to the Proxmox ceph.conf file, copy the conf and the admin keyring over to them, start the services on those two servers and hope for the best. See the Ceph manual deployment documentation.
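
A very rough, unverified sketch of that route on the two Debian hosts (the repository line, the hostname pve1 and the disk names are placeholders; the repo key also needs to be trusted first):

Code:
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
echo "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" > /etc/apt/sources.list.d/ceph.list
apt update && apt install ceph

# Copy the cluster config and admin keyring from the Proxmox node:
scp pve1:/etc/ceph/ceph.conf /etc/ceph/
scp pve1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/

# Turn the spare disks into OSDs and check that they join the cluster:
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
ceph osd tree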

My personal setup at the moment is 3x Proxmox hosts with Podman and cephadm installed on all three. I actually wanted to keep each piece of software separate, as I also use the Ceph cluster outside of Proxmox, and Ceph's dashboard provides more control over and insight into the cluster with regard to NFS and gateways. This is not best practice/recommended, but it works for me with limited hardware availability.

Hope any of that was useful.
 