Mounting CephFS on a client

Please use the CephFS mount on multiple containers with care, as it may raise conflicts on the host kernel. An alternative approach is to use CephFS as directory storage and use mountpoints on the CT. But as noted in the post below, the performance may not be what you expect.
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/page-6#post-263016
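For reference, the directory-storage alternative mentioned above is usually configured as a `cephfs` entry in `/etc/pve/storage.cfg` on the host; a minimal sketch, where the storage ID `cephfs-shared`, the monitor addresses, and the `admin` user are placeholders for your own setup:

```
cephfs: cephfs-shared
        path /mnt/pve/cephfs-shared
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        content backup,iso,vztmpl
        username admin
```

Container mountpoints can then be placed on that storage instead of mounting CephFS inside the CT directly.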

Agreed. As a matter of fact, we would only use bind mounts with containers, and the kernel module for all clients running outside Proxmox on bare metal (or as VMs, not containers).

What concerns us is the significant difference in performance already being discussed in the thread mentioned above.
Thank you.
 
Did you enable "FUSE" feature in your container configuration?
How do you enable the "FUSE" feature in container configuration?
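For what it's worth, the FUSE feature can be toggled per container either in the GUI (container → Options → Features) or from the CLI on the PVE host; a minimal sketch, assuming a container ID of 101 (a placeholder):

```shell
# Enable the FUSE feature flag on container 101 (run on the PVE host as root)
pct set 101 -features fuse=1

# Verify: the container config should now contain a "features: fuse=1" line
grep features /etc/pve/lxc/101.conf
```

The container needs to be restarted for the feature change to take effect.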

Please use the CephFS mount on multiple containers with care, as it may raise conflicts on the host kernel. ...
I was really hoping CephFS mounts in multiple containers were my solution to shared storage among unprivileged containers, but this sounds like a really bad idea. How risky is it? What IS the best way to share storage among multiple unprivileged containers?
 
If I mount CephFS as /mnt/bindmounts/cephfs on the host and then use bind mounts to give the containers shared access, would that still pose a problem?

Also, if I don't snapshot/backup, is there still a risk to using CephFS within containers?
 
If I mount CephFS as /mnt/bindmounts/cephfs on the host and then use bind mounts to give the containers shared access, would that still pose a problem?

Also, if I don't snapshot/backup, is there still a risk to using CephFS within containers?
To what problems are you referring to?
 
To what problems are you referring to?
Please use the CephFS mount on multiple containers with care, as it may raise conflicts on the host kernel. An alternative approach is to use CephFS as directory storage and use mountpoints on the CT. But as noted in the post below, the performance may not be what you expect.
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/page-6#post-263016
Or that was my understanding of this thread.

For future readers (since Google is ranking this thread highly): there are some performance/conflict issues with mounting CephFS inside containers on Proxmox, related to the containers sharing the host kernel. The issues are significant enough that Proxmox doesn't think anyone should do this (instead: https://pve.proxmox.com/pve-docs/chapter-pct.html#_bind_mount_points ). In theory, mounting CephFS on VMs or remote clients should be possible, but it will depend on the client OS's Ceph implementation playing nice.
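The bind-mount route linked above looks roughly like this, assuming CephFS is already mounted at /mnt/bindmounts/cephfs on the host and a container ID of 101 (both placeholders):

```shell
# On the PVE host: expose the host directory inside container 101 as /mnt/shared
pct set 101 -mp0 /mnt/bindmounts/cephfs,mp=/mnt/shared
```

Note that for unprivileged containers the container's UIDs/GIDs are shifted on the host, so you will typically also need to adjust ownership/permissions on the host directory (or set up an ID mapping) before the container can write to it.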
(outdated) general reference: https://knowledgebase.45drives.com/kb/kb450228-mounting-cephfs-on-linux-clients/
ymmv
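For remote clients using the kernel module, the mount usually takes this shape; the monitor address, client name, and secret file path below are placeholders for your cluster's values:

```shell
# Kernel-client mount of CephFS on a non-Proxmox Linux host
mkdir -p /mnt/cephfs
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```

The same options can go in /etc/fstab for a persistent mount; the client needs the ceph kernel module and a keyring/secret for the CephX user it mounts as.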
 
