Since there isn't a lot of info on how to get CephFS running on Proxmox, I thought it would be a good idea to write up how I got a proof-of-concept style setup "working" (there are some limitations that I currently don't know how to fix).
First up, this is my setup:
hostname: pm1
A physical server running as a single-node Proxmox cluster and a single-node Ceph cluster. This machine also hosts all the VMs used below.
hostnames: prox1, prox2, prox3
Three VMs that give me a working multi-node Proxmox and Ceph cluster.
hostname: ubuntu1
An Ubuntu Live VM, used as the CephFS client.
On prox1-3 you follow the standard Ceph server install guides; a rough pveceph sketch follows below. Once you get the cluster up and running it's time to start working on the MDS (metadata server).
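In case it helps, on Proxmox the base cluster on prox1-3 can be brought up with the pveceph tooling. A rough sketch only, the network range and the disk name are assumptions from my lab, so adjust them to yours (run the mon/OSD steps on each node):
Code:
pveceph install
pveceph init --network 10.10.10.0/24
pveceph createmon
pveceph createosd /dev/sdb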
First you need to install the ceph-mds package, so run this on one of the nodes:
Code:
apt install ceph-mds
Next you can follow the Ceph manual:
http://docs.ceph.com/docs/master/install/manual-deployment/#adding-mds
Note that the cluster name in that guide is not the same as your Proxmox cluster name; in step 5 the cluster name is used to load the corresponding .conf file. So if you use 'blabla' as the cluster name, it will try to load blabla.conf, and if that fails it falls back to the default settings in ceph.conf, as far as I understand. Step 5 failed for me, so I just did step 6 and everything seems to be working.
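For reference, on a node called prox1 with the default cluster name 'ceph', the manual steps boil down to roughly the following. This is a sketch based on that guide, not the exact commands I ran; adjust the hostname and ID to your node:
Code:
mkdir -p /var/lib/ceph/mds/ceph-prox1
ceph auth get-or-create mds.prox1 mon 'allow profile mds' osd 'allow rwx' mds 'allow *' -o /var/lib/ceph/mds/ceph-prox1/keyring
chown -R ceph:ceph /var/lib/ceph/mds/ceph-prox1
systemctl start ceph-mds@prox1
The guide also adds an [mds.prox1] section with "host = prox1" to the .conf file, which on Proxmox means editing /etc/pve/ceph.conf.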
Don't try to check the Ceph status for the MDS yet; it won't show up until the filesystem is created.
Up next is creating the pools and the actual filesystem; follow this Ceph manual for that:
http://docs.ceph.com/docs/master/cephfs/createfs/
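For a small test setup like this, the commands from that guide come down to something like the following. The pool names match the ones used below; the PG count of 64 is an assumption, so pick what fits your cluster:
Code:
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data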
And finally run this command, changing the pool name if you used a different one:
Code:
ceph osd pool application enable cephfs_metadata cephfs
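If ceph status also warns about the data pool, the same command should work for it as well (assuming the default name from the createfs guide):
Code:
ceph osd pool application enable cephfs_data cephfs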
Congrats, you now have a running CephFS on Proxmox.
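To verify on one of the Proxmox nodes, the standard status commands should now show the MDS and the new filesystem:
Code:
ceph mds stat
ceph fs ls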
To test whether it's working from a client, I used Ubuntu 16.04 LTS Live in a VM. Open a terminal and use the following command to install the needed Ceph components:
Code:
sudo apt install ceph-common ceph-fuse
With that installed you can follow this Ceph manual:
http://docs.ceph.com/docs/master/cephfs/fuse/
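Step 1 of that guide is copying the cluster's ceph.conf to the client. On a Proxmox node it lives at /etc/ceph/ceph.conf (a symlink to /etc/pve/ceph.conf), so something like this should do, assuming root SSH access to prox1:
Code:
sudo scp root@prox1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf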
In step 2 you need to run:
Code:
sudo mkdir -p /etc/pve/priv
sudo scp {user}@{server-machine}:/etc/pve/priv/ceph.client.admin.keyring /etc/pve/priv/ceph.client.admin.keyring
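With the keyring in place, the actual mount is just the ceph-fuse call from the guide. A minimal sketch, assuming a monitor is running on prox1 on the default port:
Code:
sudo mkdir -p /mnt/cephfs
sudo ceph-fuse -m prox1:6789 /mnt/cephfs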
In my setup the CephFS mount only has write access for root, but it works, and files placed in the folder are still there after I reboot the Ubuntu VM and install the client again. The write access issue can be solved by creating a folder as root and setting the permissions on that folder, as sketched below.
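A minimal sketch of that permissions fix, assuming the mount point above and the default first Ubuntu user (UID/GID 1000):
Code:
sudo mkdir /mnt/cephfs/shared
sudo chown 1000:1000 /mnt/cephfs/shared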