Ceph client connecting to CephFS - what am I doing wrong?

scyto

Active Member
Aug 8, 2023
I would like to connect a VM to CephFS by mounting the CephFS inside the VM.
I am having issues with both the kernel driver in the VM and ceph-fuse - they never find the MDS.

On Proxmox:
  • CephFS (Reef) fully configured and working in Proxmox
  • Two CephFS filesystems - one called ISOs and one called docker
  • These work fine on the Proxmox cluster
  • The v1 and v2 Ceph protocols are enabled and listening on their respective ports (see the check sketched below)
  • Firewall turned off for troubleshooting
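For reference, the msgr v1/v2 addresses each MON advertises can be verified on a Proxmox node roughly like this (a sketch; the output is abbreviated and uses the addresses from the dmesg log further down):
Code:
# run on a Proxmox node; lists the v2 (3300) and v1 (6789) address of each MON
ceph mon dump
# expected to contain lines roughly like:
#   0: [v2:[fc00::81]:3300/0,v1:[fc00::81]:6789/0] mon.pve1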
On the client (a Debian guest with a 5.10 kernel):
  • Installed ceph-fs-common and ceph-fuse in the Debian guest - this is a Nautilus-era client I think... but I was under the impression that should be OK...
  • Confirmed the guest can ping the MON/MDS IPv6 addresses
  • mkdir /etc/ceph
  • Made the keyring: ssh root@pve1 "sudo ceph fs authorize docker client.docker01 / rw" | sudo tee /etc/ceph/ceph.client.docker01.keyring
  • Made a minimal conf: ssh root@pve1 "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
  • Set the correct ownership/permissions on both as per the Ceph docs (see the sketch after this list)
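For completeness, the ownership/permissions step looked roughly like this (a sketch of what the Ceph docs suggest; the exact modes are my assumption):
Code:
# conf readable by everyone, keyring readable by root only
sudo chown root:root /etc/ceph/ceph.conf /etc/ceph/ceph.client.docker01.keyring
sudo chmod 644 /etc/ceph/ceph.conf
sudo chmod 600 /etc/ceph/ceph.client.docker01.keyring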
When I try to do any kind of mount it times out (the same happens with ceph-fuse).

For example:
Code:
sudo mount -t ceph :/ /mnt/cephfs -o name=docker01

or

Code:
sudo mount -t ceph :/ /mnt/cephfs -o name=docker01,mds_namespace=docker
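As I understand it, the monitors can also be passed explicitly on the command line instead of relying on ceph.conf - roughly like this (the bracketed IPv6 addresses are taken from the dmesg output below, and the secretfile is assumed to contain just the base64 key extracted from the keyring):
Code:
# explicit MON list with bracketed IPv6 addresses, plus the filesystem name
sudo mount -t ceph [fc00::81]:6789,[fc00::82]:6789,[fc00::83]:6789:/ /mnt/cephfs \
    -o name=docker01,secretfile=/etc/ceph/docker01.secret,mds_namespace=docker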

Then it hangs and exits; dmesg shows this:
Code:
[97878.802920] libceph: mon0 (1)[fc00::81:6789]:6789 socket closed (con state CONNECTING)
[97880.627116] libceph: mon0 (1)[fc00::81:6789]:6789 socket closed (con state CONNECTING)
[97882.642760] libceph: mon2 (1)[fc00::83:6789]:6789 socket closed (con state CONNECTING)
[97883.602517] libceph: mon2 (1)[fc00::83:6789]:6789 socket closed (con state CONNECTING)
[97884.595151] libceph: mon2 (1)[fc00::83:6789]:6789 socket closed (con state CONNECTING)
[97885.715091] libceph: mon1 (1)[fc00::82:6789]:6789 socket closed (con state CONNECTING)
[97887.602452] libceph: mon1 (1)[fc00::82:6789]:6789 socket closed (con state CONNECTING)
[97888.017092] ceph: No mds server is up or the cluster is laggy

The ceph-mon process is listening on each node like it should:
Code:
root@pve1:/etc/pve/priv/ceph# lsof -i :6789
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
ceph-mon 1503 ceph   28u  IPv6  22670      0t0  TCP [fc00::81]:6789 (LISTEN)
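From the client side, reachability of the MON port itself can also be checked beyond a plain ping, for example (assuming nc is available in the guest):
Code:
# test TCP reachability of the v1 MON port from the Debian guest
nc -6 -zv fc00::81 6789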

I am stumped as to what I should try next.
 
[97878.802920] libceph: mon0 (1)[fc00::81:6789]:6789 socket closed (con state CONNECTING)
It looks like the port number 6789 somehow became part of the IPv6 address.

Check your ceph.conf and make sure the IPv6 addresses for the MONs are correct. You do not need the default port in ceph.conf at all.
Just list the IPv6 addresses in the mon_host line.
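Something along these lines should be enough (the fsid and addresses are placeholders for your own values):
Code:
[global]
        fsid = <your cluster fsid>
        mon_host = fc00::81 fc00::82 fc00::83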
 
