So is the CephFS 'integration' in Proxmox meant for running both 1) Ceph serving RBD to VMs and 2) CephFS for mounts within VMs on the same Proxmox nodes?
Yes, you can do this. I have VM and container images on Ceph RBD and shared network filesystems on CephFS. Here's how I configured mine (the matching commands are sketched after the list):
- enabled metadata servers (MDS) on three nodes
- created two new pools: "fs_data" on HDD (2 replicas) and "fs_meta" on SSD (3 replicas)
- created a new CephFS using the two pools above
- copied the admin secret into the VM as /etc/priv/ceph.client.admin.secret
- created a systemd mount unit at /etc/systemd/system/mnt-ceph.mount
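For reference, those steps map roughly to the commands below, run on a Proxmox node with the Ceph tooling set up. The crush rule names, PG counts, and the filesystem name "cephfs" are placeholders from my sketch rather than anything Proxmox requires, so adjust them for your cluster.

# one metadata server per node, run on each of the three nodes (or use the GUI)
pveceph mds create

# device-class rules so the pools land on HDD and SSD respectively
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd

# data pool on HDD with 2 replicas, metadata pool on SSD with 3 replicas
ceph osd pool create fs_data 64
ceph osd pool set fs_data crush_rule replicated_hdd
ceph osd pool set fs_data size 2
ceph osd pool create fs_meta 32
ceph osd pool set fs_meta crush_rule replicated_ssd
ceph osd pool set fs_meta size 3

# tie the two pools together into a filesystem
ceph fs new cephfs fs_meta fs_data

# extract the admin key on the node, then copy it into the VM
# as /etc/priv/ceph.client.admin.secret (mode 600)
ceph auth get-key client.admin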
Here's what the systemd unit looks like:

[Unit]
Description=Mount CephFS

[Mount]
# monitor hosts (comma-separated), then the path within the CephFS to mount
What=node1,node2,node3:/incoming
Where=/mnt/ceph
Type=ceph
# _netdev marks this as a network mount; secretfile points at the client.admin key
Options=name=admin,secretfile=/etc/priv/ceph.client.admin.secret,_netdev,noatime

[Install]
WantedBy=multi-user.target
Then enable and start the systemd unit.
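Something like this; note that systemd requires the unit file name to match the mount point, so /mnt/ceph has to be mnt-ceph.mount:

systemctl daemon-reload
systemctl enable --now mnt-ceph.mount

# quick check that it actually mounted
systemctl status mnt-ceph.mount
df -h /mnt/ceph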
If you are using a container (not a VM), there is one more step: you need to change the AppArmor profile to allow Ceph mounts inside the container. Add this line to /etc/pve/lxc/xyz.conf on the Proxmox node.
lxc.apparmor.profile: lxc-container-default-ceph
And create a new profile at /etc/apparmor.d/lxc/lxc-default-ceph on the Proxmox node by copying the existing lxc-default profile (it will already be there) to lxc-default-ceph and making these edits:
> profile lxc-container-default-ceph flags=(attach_disconnected,mediate_deleted) {
> mount fstype=cgroup -> /sys/fs/cgroup/**,
> mount fstype=cgroup2 -> /sys/fs/cgroup/**,
> mount fstype=ceph,
This is only needed for containers. Note that the filename is lxc-default-ceph, but the profile name inside the file is lxc-container-default-ceph.
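On the Proxmox node that boils down to something like the following (xyz is the container ID placeholder from above; the reload and restart steps are just the stock AppArmor and pct commands, nothing Ceph-specific):

cp /etc/apparmor.d/lxc/lxc-default /etc/apparmor.d/lxc/lxc-default-ceph
# edit lxc-default-ceph as described above, then reload the LXC profiles
apparmor_parser -r /etc/apparmor.d/lxc-containers
# restart the container so it starts under the new profile
pct stop xyz && pct start xyz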
This setup has been reliable and performance is good.