Mounting a Ceph pool for a total beginner

Bojan Pogacar

Oct 22, 2017
Hello!

I have 4 test nodes in a cluster. I've successfully created a Ceph cluster and it is working well for VM storage (ceph-vm).

I've created another pool (ceph-storage) that I want to mount from a VM to store files.

Is that possible? If so, how can I mount it?

mount -t ceph 10.10.10.1:/ /mnt/ceph-storage -o name=admin,secret=BQDz/ehZBRmwFhAAhkpfVQ5/mL8NLJLLOScsaw==

reports
mount: /mnt/ceph-storage: cannot mount 10.10.10.1:/ read-only.

Please help, I can't find a solution online.

Thank you.
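
Note that mount -t ceph only works against a CephFS filesystem, which requires a metadata server (MDS); a plain RADOS pool such as ceph-storage cannot be mounted that way. Assuming admin access on a monitor node, commands along these lines show whether a CephFS filesystem exists at all and which pools are defined:

Code:
ceph fs ls        # lists CephFS filesystems (empty if none exists)
ceph mds stat     # shows whether any metadata server is running
ceph osd lspools  # lists the RADOS pools, e.g. ceph-vm and ceph-storage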
 
Check this post for guidance. Note that what you created is block storage.

https://www.virtualtothecore.com/en...unt-ceph-as-a-block-device-on-linux-machines/

For example:
rbd create data --size 10000
rbd feature disable data exclusive-lock object-map fast-diff deep-flatten
rbd map data
mkfs.xfs -L data /dev/rbd0
mount /dev/rbd0 /mnt/

Now all you have to do is this on other nodes:
rbd map data
mount /dev/rbd0 /mnt/

Now go to Datacenter → Storage and add the mount point as a Directory storage.
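
If the image is meant to live in the ceph-storage pool from the question rather than the default 'rbd' pool, the same steps would look roughly like this (pool, image and mount-point names are only examples; the size is in MB, so 10240 is 10 GiB):

Code:
rbd create ceph-storage/data --size 10240      # 10 GiB image in the ceph-storage pool
rbd feature disable ceph-storage/data exclusive-lock object-map fast-diff deep-flatten   # older kernels cannot map images with these features
rbd map ceph-storage/data                      # prints the block device, usually /dev/rbd0
mkfs.xfs -L data /dev/rbd0
mount /dev/rbd0 /mnt/ceph-storage

To have the mapping survive a reboot, an entry of the form ceph-storage/data id=admin,keyring=/etc/ceph/ceph.client.admin.keyring in /etc/ceph/rbdmap (the file format is shown further down in this thread) lets the rbdmap service map the image at boot - but treat that part as untested.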
 
Check this post for guidance. Note that what you created is block storage.
[snipped]

Hi,

I tried this but got the following error:


rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.


Code:
root@virt1:~# more /etc/ceph/rbdmap
# RbdDevice             Parameters
#poolname/imagename     id=client,keyring=/etc/ceph/ceph.client.keyring
root@virt1:~# more /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content iso,vztmpl
        maxfiles 1
        shared 0

lvmthin: local-lvm
        disable
        thinpool data
        vgname pve
        content rootdir,images

nfs: ISOS
        export /data/isos
        path /mnt/pve/ISOS
        server 192.168.102.2
        content iso
        maxfiles 1
        options vers=3

rbd: Data_vm
        content images
        krbd 0
        pool Data

rbd: Data2_vm
        content images
        krbd 0
        pool Data2
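
The rbd tool operates on a pool literally named 'rbd' unless a pool is given, and the storage.cfg above only defines the pools Data and Data2, so the commands from the earlier post most likely need the pool spelled out (the image name 'data' is just an example):

Code:
ceph osd lspools                      # confirm which pools exist
rbd create Data/data --size 10000     # or: rbd create data --size 10000 --pool Data
rbd feature disable Data/data exclusive-lock object-map fast-diff deep-flatten
rbd map Data/data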
 
Hi guys,

I want to build a small file server at home, with the advantages of Ceph + Proxmox.
So, to learn things, I've set it all up on VMware Workstation, just for testing and understanding.
What I've achieved:
- a working Proxmox cluster
- a working Ceph cluster
- 6 OSDs up and in - small ones, as these are only tests
- a pool created for everything needed (containers, images)

Now I'm downloading the turnkey-fileserver appliance LXC container. It uses Samba and other sharing techniques to share data.
But it needs a locally mounted directory, which will then be shared. So, according to the instructions above in this thread, I would need to mount my storage RBD image locally. (I assume that, because it is impossible to predict the exact space to share, we can create our RBD device much bigger than the physical disks we have, right? So even if we have, for example, 4 TB of usable space, we can make this device as if it were 20 TB?)

And now to the point: even if we mount the same RBD device on all nodes, how do we deal with High Availability?
I want to create an HA resource and attach this LXC fileserver to it, so it will always survive and be moved to another node in case the current one fails. But what about the mount point it uses to share the data? It would probably have to be remounted read-write when the machine moves, and I wonder whether this would disrupt a transfer in progress. Any clues about it?

You could probably say: set up a 4th machine, install just Samba and share from Ceph using iSCSI, but why would I do that - it's a small home environment, and hey, I want to learn something :)

Or maybe there is another way to achieve such an 'all-in-one' solution (and don't tell me: buy a NAS :D )

Thanks in advance for any advice on this interesting use case :)
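
On the sizing question: RBD images are thin-provisioned, so an image can indeed be created much larger than the raw capacity and only consumes space as data is written - although nothing stops the cluster from filling up later. A rough illustration with made-up pool/image names:

Code:
rbd create Data/fileserver --size 20T   # 20 TB virtual size, almost no space used yet
rbd du Data/fileserver                  # provisioned vs. actually used size of the image
ceph df                                 # how full the pool and the cluster really are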
 
Hi guys,

I want to build a small file server at home, with the advantages of Ceph + Proxmox.
[snipped]
Thanks in advance for any advice on this interesting use case :)

Hello Twinsen, I am in somewhat the same situation as you. One thing I have found that might interest you is that CephFS can be shared out using SMB/CIFS. If you do this on all your nodes and use DNS load balancing, or possibly a virtual IP (though I do not yet know how to set that up), I think we could theoretically create the following example:

An SMB client connects to smb://smbceph.example.lan, which load-balances to 192.168.1.101, 192.168.1.102 and 192.168.1.103. The CephFS contents are the same on each node because of Ceph. Should a node go down, load balancing ensures the connecting client seamlessly moves to the next node.

This example is based on my current understanding, which might well be lacking. It is also entirely possible that I am flat out wrong :). Please point it out if that is the case.
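
For what it's worth, the "DNS load balancing" in that example would usually be plain round-robin A records, something like this zone-file snippet (name and addresses taken from the example above):

Code:
; three A records for the same name - clients get the addresses in rotating order
smbceph.example.lan.    300    IN    A    192.168.1.101
smbceph.example.lan.    300    IN    A    192.168.1.102
smbceph.example.lan.    300    IN    A    192.168.1.103

Round-robin alone only spreads new connections, though; an already established SMB session will not move seamlessly, which is probably where the virtual-IP idea comes in.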

You can find more information about using SMB/CIFS with CephFS by searching for "vfs_ceph.8". (I would have linked directly, but as a new user on this forum I am not allowed to post external links.) I am still researching this myself and have no idea (yet) how to make this work. But perhaps we can help each other (and the community) by figuring this out together!

Edit:
So I searched around and found that ceph.so, which is part of vfs_ceph, is not included in the Samba version that ships with Debian Stretch, but is part of Buster. As such, we would need to import a .deb file. I found the latest version, which should include ceph.so as well: packages.debian.org/sid/amd64/samba/download

So that has to be installed with # dpkg -i samba_4.7.4+dfsg-1_amd64.deb
Possibly some dependency cleanup also has to be done with # apt-get install -f
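
After installing the newer package, whether the module actually arrived can be checked with something like the following (the path is where Debian normally puts Samba VFS modules, so treat it as an assumption):

Code:
# ls -l /usr/lib/x86_64-linux-gnu/samba/vfs/ceph.so
# smbd --version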

Next up would be:
1. Get CephFS working, https://forum.proxmox.com/threads/cephfs-installation.36459/
2. Mount CephFS using ceph-fuse
3. Share the CephFS mount using Samba's vfs_ceph (rough sketch of 2 and 3 below)
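
For steps 2 and 3, a very rough, untested sketch of what that could look like (monitor address, share name and user are assumptions; with vfs_ceph Samba talks to CephFS directly, so the ceph-fuse mount is mainly useful for testing that CephFS itself works):

Code:
# mount CephFS via FUSE for a quick test (step 2)
mkdir -p /mnt/cephfs
ceph-fuse -m 192.168.1.101:6789 /mnt/cephfs

# /etc/samba/smb.conf share backed by vfs_ceph (step 3)
[cephshare]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = admin
    kernel share modes = no
    read only = no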
 
