[TUTORIAL] What are the steps to create and mount CephFS on a Linux VM?

Mayank006

Dec 6, 2023
It would help if the steps to create CephFS were GUI-based. I currently have Ceph configured as shared storage on all 3 nodes.
The Linux VMs are on local storage, and I need to use CephFS to create a shared folder between 2 VMs on different nodes.
 
This might not be the best way, but it is the way I have been doing it to mount my CephFS storage into some VMs:

Code:
* I will reference Debian/Ubuntu commands as that is the distribution I use. A filled-in example with sample values follows this list.

1) On your VM, install the ceph-common package: {sudo} apt install ceph-common
2) On your VM, execute: echo "CONTENTS OF A VALID CEPH KEY" > /etc/ceph/[I USED THE SHARE NAME].secret
3) Create the mount directory: {sudo} mkdir -p /mnt/[I USED THE SHARE NAME]
4) Test the mount with: {sudo} mount -t ceph [IP ADDRESSES OF YOUR NODES, SEPARATED BY SINGLE COMMAS]:/ [MOUNT DIRECTORY] -o name=[USERNAME TO MOUNT AS],secretfile=[PATH TO KEY FILE],fs=[SHARE NAME]
5) Once it is working, unmount the share: umount [PATH TO MOUNT DIRECTORY]
6) Update your /etc/fstab file as follows: {sudo} nano /etc/fstab
        # Mount Ceph storage for [SHARE NAME]
        [IP ADDRESSES OF YOUR NODES, SEPARATED BY SINGLE COMMAS]:/ [MOUNT DIRECTORY] ceph name=[USERNAME TO MOUNT AS],secretfile=[PATH TO KEY FILE],fs=[SHARE NAME],noatime,_netdev 0 0
7) Run the following to mount the share (it will be auto-mounted when the system boots afterwards):
    {sudo} systemctl daemon-reload && {sudo} mount -a
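
For reference, here is the same sequence with sample values filled in. The share name "shared", the user "shareuser", and the monitor addresses 192.168.1.11-13 are made-up placeholders; substitute your own. You can print a client's key on a Ceph node with ceph auth get-key client.NAME.

Code:
# All values below are sample placeholders -- substitute your own.
sudo apt install ceph-common
# Key contents printed on a node with: ceph auth get-key client.shareuser
echo "AQDexampleSecretKeyOnly==" | sudo tee /etc/ceph/shared.secret
sudo chmod 600 /etc/ceph/shared.secret    # keep the key readable by root only
sudo mkdir -p /mnt/shared
sudo mount -t ceph 192.168.1.11,192.168.1.12,192.168.1.13:/ /mnt/shared \
    -o name=shareuser,secretfile=/etc/ceph/shared.secret,fs=shared
sudo umount /mnt/shared
# /etc/fstab entry:
# 192.168.1.11,192.168.1.12,192.168.1.13:/ /mnt/shared ceph name=shareuser,secretfile=/etc/ceph/shared.secret,fs=shared,noatime,_netdev 0 0
sudo systemctl daemon-reload && sudo mount -a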
 
I didn't specify a mount size on mine; when I created the CephFS share there was no way to specify a size in Proxmox VE. I currently have 2 Ceph clusters: one hyper-converged within my 7-node Proxmox VE cluster, and another 3-node cluster used only for Ceph, though I set that one up with Proxmox VE as well since I was more familiar with that method. I do plan to eventually switch that second cluster from Proxmox VE to Debian 12 with Ceph installed per the upstream instructions, as I don't need to host any VMs or containers on it.
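
On sizes: CephFS itself has no fixed size (it grows with the backing pools), but a directory can be capped with a quota after mounting. A rough sketch, using the sample /mnt/shared mount from above and an arbitrary 10 GB cap; the shell commands are roughly the equivalent of the GUI creation workflow:

Code:
# On a Proxmox VE node: create a metadata server, then the filesystem,
# and register it as storage (what the GUI does behind the scenes).
pveceph mds create
pveceph fs create --name cephfs --add-storage

# Cap a CephFS directory with a quota (requires the attr package on the client):
setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/shared   # 10 GB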
 
Last question: do I need to perform all of steps 1-7 on every VM that mounts/accesses the same shared CephFS?
 
Any VM that is going to use the CephFS storage location needs these commands run on it. I use it for storing Docker volumes in a shared space, as I have had issues using NFS or CIFS with some services that I run (SQLite DBs). What I did was install my distro of choice, then install Docker and set up the CephFS mount, and then clone that VM as many times as I needed. I have a script, run with sudo once a VM is cloned, that makes it unique from the other clones (regenerate SSH keys, update MAC addresses, change password(s), change the hostname and hosts file, etc.); a rough sketch of such a script follows.
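
A minimal sketch only, for Debian/Ubuntu guests; the machine-id reset and password-expiry steps are my assumptions about what such a script covers, and MAC addresses are usually changed on the Proxmox side (e.g. via qm set) rather than inside the guest:

Code:
#!/bin/bash
# Make a cloned Debian/Ubuntu VM unique. Run once with sudo after cloning.
set -euo pipefail

NEW_HOSTNAME="$1"                 # e.g. ./uniquify.sh docker-03
OLD_HOSTNAME="$(hostname)"

# Regenerate SSH host keys
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server

# Reset the machine ID so DHCP/systemd treat this as a new machine
rm -f /etc/machine-id
systemd-machine-id-setup

# Change the hostname and fix /etc/hosts
hostnamectl set-hostname "$NEW_HOSTNAME"
sed -i "s/${OLD_HOSTNAME}/${NEW_HOSTNAME}/g" /etc/hosts

# Force a password change on next login for the invoking user
passwd --expire "${SUDO_USER:-root}"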
 
Once you have created the CephFS (this can be done via the GUI once you have set up some metadata servers), I just ran the following on each host I wanted it on:

Code:
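# New-style device syntax: [user]@[fsid].[fs_name]=/path; when the fsid is
# omitted, mount.ceph reads it from the local /etc/ceph/ceph.conf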
mount -t ceph admin@.cephfs=/ /ceph

Then you can use an LXC mount point to pass that host mount through to a container, even to a subdirectory. For example:
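
The container ID 101 and the paths here are placeholders:

Code:
# Bind-mount the host's /ceph (or a subdirectory of it) into container 101
pct set 101 -mp0 /ceph/myshare,mp=/mnt/shared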

On the hosts, I simply made a small unit/service file to run the startup mount once the nodes are up, along these lines:
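
I don't know the poster's exact file, but a minimal sketch of such a unit could look like this (names and paths match the example above; enable it with systemctl daemon-reload && systemctl enable --now ceph-local-mount.service):

Code:
# /etc/systemd/system/ceph-local-mount.service
[Unit]
Description=Mount CephFS at /ceph
After=network-online.target remote-fs-pre.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/mount -t ceph admin@.cephfs=/ /ceph
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target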

To note: my CephFS is called 'cephfs' and the local mount point is /ceph.
 
This might not be the best way, but it is the way I have been doing it to mount my CephFS storage into some VMs:

You may want to mention that the VM in question has to have access to the Ceph public network in some manner. I imagine this tutorial was written in an environment where all the various traffic types travel on a single vmbr; this is bad practice on multiple levels.
 
You may want to mention that the VM in question has to have access to the Ceph public network in some manner. I imagine this tutorial was written in an environment where all the various traffic types travel on a single vmbr; this is bad practice on multiple levels.
The traffic is split out on various physical and virtual networks.
 
Somewhere in there you might have to ensure that your monitors are listed under [global] in /etc/ceph/ceph.conf. This was keeping me from establishing a connection using steps 1-7 above. For example:
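
Something like the following, where the fsid and monitor addresses are placeholders for your own cluster's values:

Code:
[global]
    fsid = 12345678-abcd-ef01-2345-6789abcdef01
    mon_host = 192.168.1.11 192.168.1.12 192.168.1.13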
 
