Security of exposing Ceph Monitors

Nov 14, 2019
The Ceph Monitors are supposed to be exposed on the public network so that clients can reach them to mount CephFS using the kernel driver or FUSE.

What harm could a compromised client do to the cluster by exploiting the connection to the Ceph Monitors? Are the Monitors secure enough not to worry about this scenario?
 
The Ceph Monitors are supposed to be exposed on the public network so that clients can reach them to mount CephFS using the kernel driver or FUSE.

Note that while Ceph names it as such, the "public_network" should normally not be the WAN or any other untrusted network.
While the communication is protected by cephx authentication (as long as that is not disabled), a client on that network can still overload the network, gather information about where the monitor nodes reside, and so on.

What harm could a compromised client do to the cluster by exploiting the connection to the Ceph Monitors? Are the Monitors secure enough not to worry about this scenario?

Here it's important to know what the clients really are. For example, with VMs in Proxmox VE, the client is not the VM's operating system but the QEMU process running on the Proxmox VE host, so the client really is the PVE node, not the VM itself.

Internally, a VM can naturally write as much as its RBD disks allow, and rewrite as often as it wants, with the QEMU disk I/O rate limits enforced as usual. But it cannot trigger direct, arbitrary communication with the Ceph Monitors. So your compromised client would need to be a Proxmox VE node to "DDoS" the Ceph Monitors, not a compromised VM. And if an attacker is on a PVE node, they can just DDoS that node itself; in a hyper-converged cluster, Ceph will feel that impact too.

In short, as long as the "public_network" is operated on a restricted network (though not necessarily one dedicated solely to the monitors and their clients), monitor security and resource availability are as good as those of the Proxmox VE nodes themselves. Access to those nodes should be restricted anyway if one wants to operate a secure setup while still allowing access from a possibly insecure environment.
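If the monitors do have to sit on a routable network, one way to enforce such a restriction is a host firewall that only accepts connections to the monitor ports from the trusted subnet. A minimal sketch, assuming iptables and a placeholder subnet of 10.10.10.0/24 (neither is from this thread); 3300 is the msgr2 monitor port and 6789 the legacy msgr1 port:

```shell
# Allow Ceph Monitor traffic (msgr2 on 3300, legacy msgr1 on 6789)
# only from the trusted "public_network" subnet.
# 10.10.10.0/24 is a placeholder -- adjust to your environment.
iptables -A INPUT -p tcp -m multiport --dports 3300,6789 -s 10.10.10.0/24 -j ACCEPT
# Drop monitor traffic from everywhere else.
iptables -A INPUT -p tcp -m multiport --dports 3300,6789 -j DROP
```

This only narrows who can reach the monitors at all; it does not replace cephx authentication, which should stay enabled regardless.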
 
We would like to mount CephFS in a VM without using VirtFS, because using VirtFS breaks live migration:
Code:
2019-12-11 15:46:22 migrate uri => unix:/run/qemu-server/103.migrate failed: VM 103 qmp command 'migrate' failed - Migration is disabled when VirtFS export path '/mnt/pve/cephfs' is mounted in the guest using mount_tag 'vm101_share'
All solutions I found so far need a network connection to a Ceph Monitor.
https://docs.ceph.com/docs/master/cephfs/kernel/
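For reference, a kernel-driver mount from inside a guest, per the linked documentation, would look roughly like the following; the monitor address, client name, and paths are placeholders, and it still requires network reachability to a monitor, which is exactly the constraint discussed above:

```shell
# Mount CephFS via the kernel driver.
# 10.10.10.1, "vmuser", and the paths are placeholders, not values
# from this thread. The client key is read from a root-only file
# rather than being passed on the command line.
sudo mount -t ceph 10.10.10.1:6789:/ /mnt/cephfs \
    -o name=vmuser,secretfile=/etc/ceph/vmuser.secret
```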
 
