Use case CephFS integration

May 20, 2017
What is the use case of the recently added CephFS 'integration' in Proxmox?

I have Ceph running on my Proxmox nodes as storage for the VMs. I also want to be able to use mounts within those VMs, and CephFS is suitable for that.

So is the CephFS 'integration' in Proxmox meant for running both 1) Ceph serving RBD to VMs and 2) CephFS for mounts within VMs on the same Proxmox nodes?
 
> So is the CephFS 'integration' in Proxmox meant for running both 1) Ceph serving RBD to VMs and 2) CephFS for mounts within VMs on the same Proxmox nodes?

Yes, you can do this. I have VM and container images on Ceph RBD and shared network filesystems on CephFS. Here's how I configured mine:
  • enabled metadata servers on three nodes
  • created new pools "fs_data" on HDD (2-repl) and "fs_meta" on SSD (3-repl)
  • created new cephfs using the two pools above
  • copied the admin secret into the VM under /etc/priv/ceph.client.admin.secret
  • created a systemd unit /etc/systemd/system/mnt-ceph.mount
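For reference, the pool and filesystem steps above can be sketched with the plain Ceph CLI (the metadata servers can also be enabled per node from the Proxmox GUI or with pveceph). The pool names are the ones from my setup; the PG counts are placeholders you'd size for your own cluster, and the device-class CRUSH rules assume your OSDs report hdd/ssd classes:

```shell
# CRUSH rules that pin a pool to a device class (Luminous or later).
ceph osd crush rule create-replicated rule_hdd default host hdd
ceph osd crush rule create-replicated rule_ssd default host ssd

# Data pool on HDD with 2 replicas; PG counts are illustrative.
ceph osd pool create fs_data 128 128 replicated rule_hdd
ceph osd pool set fs_data size 2

# Metadata pool on SSD with 3 replicas.
ceph osd pool create fs_meta 32 32 replicated rule_ssd
ceph osd pool set fs_meta size 3

# Create the CephFS from the two pools (metadata pool comes first).
ceph fs new cephfs fs_meta fs_data
```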
Here's what the systemd unit looks like.

[Unit]
Description=Mount CephFS

[Mount]
What=node1,node2,node3:/incoming
Where=/mnt/ceph
Type=ceph
Options=name=admin,secretfile=/etc/priv/ceph.client.admin.secret,_netdev,noatime

[Install]
WantedBy=multi-user.target

Then enable and start the systemd unit.
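Concretely (the unit file name mnt-ceph.mount is dictated by systemd's path escaping for the Where= path /mnt/ceph):

```shell
systemctl daemon-reload
systemctl enable --now mnt-ceph.mount
systemctl status mnt-ceph.mount   # should show CephFS mounted at /mnt/ceph
```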

If you are using a container (not a VM), you need one more step: change the AppArmor profile to allow Ceph mounts inside the container. Add this line to /etc/pve/lxc/xyz.conf on the Proxmox node.

lxc.apparmor.profile: lxc-container-default-ceph

Then create a new profile /etc/apparmor.d/lxc/lxc-default-ceph on the Proxmox node by copying the existing lxc-default profile to lxc-default-ceph and making these edits.

> profile lxc-container-default-ceph flags=(attach_disconnected,mediate_deleted) {
>   mount fstype=cgroup -> /sys/fs/cgroup/**,
>   mount fstype=cgroup2 -> /sys/fs/cgroup/**,
>   mount fstype=ceph,
>   ...
> }

This is only needed for containers. Note the filename is lxc-default-ceph but the name inside the file is lxc-container-default-ceph.
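If it helps, here is roughly what that looks like on the Proxmox node. The sed rename is just a convenience and assumes the stock profile line starts with "profile lxc-container-default " (check before running); it also assumes the usual Debian layout where /etc/apparmor.d/lxc-containers includes the profiles under /etc/apparmor.d/lxc/:

```shell
cd /etc/apparmor.d/lxc
cp lxc-default lxc-default-ceph

# Rename the profile inside the copy from lxc-container-default
# to lxc-container-default-ceph, keeping the flags intact.
sed -i 's/^profile lxc-container-default /profile lxc-container-default-ceph /' lxc-default-ceph

# Manually add the line "mount fstype=ceph," inside the profile block,
# then reload the LXC AppArmor profiles:
apparmor_parser -r /etc/apparmor.d/lxc-containers

# Finally restart the container so it picks up lxc.apparmor.profile.
```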

This setup has been reliable and performance is good.
 
So, just asking: is this safe? I mean, you are exposing the Ceph network to the VM. If something happened to the VMs (as in, someone gaining access to one), the attacker would get access to that Ceph network...
 
