Non-PVE Ceph Cluster Member


Active Member
Jul 28, 2015
I'm running a PVE cluster with Ceph as my main storage, as is the norm these days, but for reasons that may or may not be common, I don't want to run PVE on all my guests or several of my physical machines. I would, however, like to install just enough software to have Proxmox VE manage their Ceph. For what it's worth, the main purpose of these systems is to export Ceph disks over Fibre Channel and to provide additional OSDs. I could run Proxmox VE on them, but I'd much prefer to stay closer to stock Debian 10 on those systems.

So what do I need to add from PVE to get "just enough Proxmox" to let PVE push out all the Ceph configuration and keys, manage the OSDs, and maybe reboot them?
First, this is not really supported, so even if it may be possible (I'm not really sure; it depends on how much effort you're willing to put in), you'll always have some subtle differences in the kernel and other relevant packages, and that may cause issues along the way.

That said, you'd at least need the pve-cluster package and to join the node manually (ensure you have our repositories activated so the corosync/kronosnet dependencies are pulled from there, not from Debian). This gets you our realtime distributed configuration cluster filesystem.
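A rough sketch of what that bootstrap could look like on a stock Debian 10 node. The package and key names below match the public PVE 6.x repository, but whether a manual join works this way on a non-PVE node is exactly the untested part discussed here:

```shell
# Add the Proxmox VE no-subscription repository (assumption: Debian 10 "buster" / PVE 6.x)
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    > /etc/apt/sources.list.d/pve.list
wget -qO /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg \
    http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg

apt update
# pve-cluster provides pmxcfs (the /etc/pve filesystem) and the pvecm tool;
# corosync/kronosnet come in as dependencies from the Proxmox repo
apt install -y pve-cluster

# Untested on a bare pve-cluster node: join the existing cluster
# (placeholder address; run and confirm on a full PVE node first)
pvecm add <ip-of-existing-cluster-node>
```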

Then you may need to create a symlink /etc/ceph/ceph.conf -> /etc/pve/ceph.conf and set up OSDs and whatever else manually with native Ceph tooling, as pveceph is not available - that tool is in the pve-manager package, which pulls in pretty much the whole Proxmox VE stack.
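For example, a sketch of those two manual steps (the OSD device path is a placeholder, and the bootstrap-osd keyring setup is not shown):

```shell
# /etc/pve/ceph.conf is the cluster-wide config distributed via pmxcfs;
# point the standard Ceph config location at it
mkdir -p /etc/ceph
ln -sfn /etc/pve/ceph.conf /etc/ceph/ceph.conf

# Create an OSD with native Ceph tooling instead of pveceph
# (placeholder device; assumes ceph packages are installed and
# the bootstrap-osd keyring is already in place)
ceph-volume lvm create --data /dev/sdb
```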
The other nodes may show some stray information about that half-PVE node and may run into errors here and there, especially in the web interface when dealing with cluster-wide operations or with that half-PVE node.

In the long run it may well become a PITA to maintain and manage; it's possible if you don't mind investing a lot of time and have good experience with Debian, PVE, and ideally even our backend stack's handling of Ceph.

FWIW: I did not test this, and I do not recommend it.
I'd really advise running Proxmox VE on those nodes; you can also install it on top of Debian if you do not want to set them up fresh:
Running Proxmox on a physical node is, a bit begrudgingly, fine. That said, one of my goals is to have several of the VMs use the Ceph cluster, and running PVE on those is quite overkill. I'll test out installing pve-cluster on the VMs and making sure they get 0 votes (they shouldn't participate in the quorum!). This probably should be supported in some sense. If I encounter any major bugs, (where) should I report them?
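For reference, the per-node vote count lives in corosync's nodelist, so the "0 votes" idea would look roughly like this fragment of /etc/pve/corosync.conf (node name, id, and address are placeholders, and I haven't verified that PVE tooling tolerates quorum_votes: 0 members):

```
nodelist {
  node {
    name: ceph-client-vm1    # placeholder name
    nodeid: 4                # placeholder id
    quorum_votes: 0          # contributes nothing to quorum
    ring0_addr: 192.0.2.10   # placeholder address
  }
}
```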
Running Proxmox on a physical node is, a bit begrudgingly, fine
How so? It's a Debian derivative; there's no real cost increase in using PVE, and you get a full-featured management stack.

I'll test out installing pve-cluster on the VMs and making sure they get 0 votes (They shouldn't participate in the quorum!).
What, in which VMs? You want to do this in VMs hosted on the same cluster you want to integrate them into?
Sorry, I only read:
run PVE on all my guests or several
correctly now.
Why would you want to do this? Why not add the OSDs directly on the PVE host?
Doing it in VMs is not a good idea, neither for Ceph OSDs nor for PVE instances! With either you will just run into problems, especially on cold start.

This probably should be supported in some sense
To be sure we're on the same page: can you please elaborate, in detail, on your actual desired setup and the reason why any Ceph daemon needs to run in VMs?
I could have been clearer. There are two use cases for the 'partial join' I'm looking for.

The first is allowing physical hosts to export some Ceph services (mostly OSDs, though MON or MDS should be possible) while keeping them on a pure Debian distribution, to avoid issues with whatever changes Proxmox introduces. While it's great for my VM hosting, I don't know how well it would work for developing a non-virtualized SAN head.

The second use case is for virtual machines (in some cases, possibly non-Debian virtual machines), to allow Proxmox to push out Ceph configuration, including changes to MON lists and credentials, so that they can interact with CephFS, RBD, and RGW stores in a semi-automated way without having to update the configuration on the VMs manually. This would be similar to what ceph-ansible provides, but managed from the Proxmox GUI/Ceph configuration sharing service.
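On the consuming side, once a pushed ceph.conf and a client keyring are in place, such a VM would only need the standard Ceph client commands, along the lines of this sketch (pool, image, user names, and the mount point are placeholders):

```shell
# Map an RBD image; assumes /etc/ceph/ceph.conf and
# /etc/ceph/ceph.client.myclient.keyring exist on the VM
rbd map mypool/myimage --id myclient

# Mount CephFS with the kernel client; with a recent mount.ceph,
# MON addresses can be resolved from ceph.conf when omitted
mkdir -p /mnt/cephfs
mount -t ceph :/ /mnt/cephfs -o name=myclient
```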

