I am very confused about how to mount a Proxmox CephFS on a client machine - help?!

scyto

I am very confused about how to make this work; I have read too many different guides on the interwebs. Here is what I have done - I know I made a mistake somewhere (the rough commands are sketched after the list)....

1. installed ceph-common on my client
2. copied the admin keyring from /etc/ceph on the pve host to /etc/ceph on the client
3. copied the ceph.conf from /etc/pve/ceph on the pve host to /etc/ceph on the client
4. created a new client keyring on pve with the command root@pve1:/etc/pve/ceph# ceph auth get-or-create client.docker01 mon 'allow r' osd 'allow rwx pool=docker' -o /etc/pve/ceph/ceph.client.docker01.keyring
5. copied this to the client at /etc/ceph
6. on the client issued sudo ceph -s (this is supposed to check the comms and config)
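
In shell form, the list above was roughly the following (hostnames and the exact scp source paths are from my setup and are approximate, so treat this as a sketch rather than gospel):

Code:
# 1. on the client
sudo apt install ceph-common

# 2 & 3. copy the admin keyring and ceph.conf from a PVE node to the client
sudo scp root@pve1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
sudo scp root@pve1:/etc/pve/ceph.conf /etc/ceph/ceph.conf

# 4. on the PVE node, create the client key
ceph auth get-or-create client.docker01 mon 'allow r' osd 'allow rwx pool=docker' \
    -o /etc/pve/ceph/ceph.client.docker01.keyring

# 5. copy that keyring to the client
sudo scp root@pve1:/etc/pve/ceph/ceph.client.docker01.keyring /etc/ceph/

# 6. sanity check from the client (plain ceph -s uses client.admin;
#    add --id docker01 to test the new key specifically)
sudo ceph -s
sudo ceph -s --id docker01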

sudo ceph -s just hangs (in fact any command just seems to hang)

I haven't tried editing fstab yet, as I assume I have to get ceph -s working first?
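
For when I do get to fstab, my understanding is the entry would look roughly like this (the mountpoint and secretfile path are placeholders, and I haven't verified the bracketed IPv6 syntax myself):

Code:
# untested sketch of a CephFS kernel-mount entry in /etc/fstab
[fc00::81]:6789,[fc00::82]:6789,[fc00::83]:6789:/  /mnt/cephfs  ceph  name=docker01,secretfile=/etc/ceph/docker01.secret,_netdev,noatime  0 0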

What mistake did I make?

This is my ceph.conf; my Ceph cluster is healthy, and yes, the client can ping the mons over IPv6 (sudo ceph ping also fails).

Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = fc00::/64
         fsid = 5e55fd50-d135-413d-bffe-9d0fae0ef5fa
         mon_allow_pool_delete = true
         mon_host = fc00::83 fc00::82 fc00::81
         ms_bind_ipv4 = false
         ms_bind_ipv6 = true
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = fc00::/64

[mds.pve1]
         host = pve1
         mds_standby_for_name = pve

[mds.pve1-1]
         host = pve1
         mds_standby_for_name = pve

[mds.pve2]
         host = pve2
         mds_standby_for_name = pve

[mds.pve2-1]
         host = pve2
         mds_standby_for_name = pve

[mds.pve3]
         host = pve3
         mds_standby_for_name = pve

[mds.pve3-1]
         host = pve3
         mds_standby_for_name = pve

[mon.pve1-IPv6]
         public_addr = fc00::81

[mon.pve2-IPv6]
         public_addr = fc00::82

[mon.pve3-IPv6]
         public_addr = fc00::83
 
Last edited:
My mons are up, for example on pve1:

Code:
# sudo ss -tunlp | grep 6789
tcp   LISTEN 0      512       [fc00::81]:6789          [::]:*    users:(("ceph-mon",pid=1457,fd=29))                                                                                                             
# sudo ss -tunlp | grep 3300
tcp   LISTEN 0      512       [fc00::81]:3300          [::]:*    users:(("ceph-mon",pid=1457,fd=28))

I can telnet to these ports and they connect (are there commands I can issue via telnet, or does that just verify that the socket connection is possible?).
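
My assumption is that telnet only proves the TCP socket opens - the mon protocol is binary - but the mon should send a short banner as soon as you connect, so something like this ought to show bytes coming back rather than a silent hang (the exact banner contents are an assumption on my part):

Code:
# read whatever the mon sends first on the msgr1 port; expect a short banner
# beginning with "ceph" instead of dead air
timeout 3 nc -6 fc00::81 6789 </dev/null | head -c 16 | od -c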
 
I can SSH between the PVE nodes over IPv6.
I can't SSH from the docker VM to the PVE node over IPv6.

Ping works.

I have a route from the client

Code:
alex@Docker01:~$ traceroute fc00::81
traceroute to fc00::81 (fc00::81), 30 hops max, 80 byte packets
 1  xxxx:xxxx:830:1::1 (xxxx:xxxx:830:1::1)  0.363 ms  0.317 ms  0.245 ms
 2  pve1 (fc00::81)  0.418 ms  0.332 ms  0.346 ms

Interesting - there are no firewall rules in place.

Does Ceph need SSH to work?
 
Last edited:
OK, I am starting to think this is a network issue - probably with routing in the Proxmox node's Linux kernel - but I am stumped as to what gives; ICMP definitely works.

In my /etc/sysctl.conf I have this, which I thought allowed full IPv4 and IPv6 routing:

Code:
net.ipv6.conf.all.forwarding=1
net.ipv4.ip_forward=1
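
Worth double-checking that the running kernel actually has these applied (sysctl.conf only takes effect at boot or after a sysctl -p); a quick way to see the live values:

Code:
# print the current runtime values; both should report 1
sysctl net.ipv6.conf.all.forwarding net.ipv4.ip_forward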

And the routing table looks good on the Proxmox nodes:

Code:
root@pve1:/etc/ceph# ip -6 route show
xxxx:xxxx:830:1::/64 dev vmbr0 proto kernel metric 256 pref medium
fc00::81 dev lo proto kernel metric 256 pref medium
fc00::82 nhid 22 via fe80::d8:35ff:fede:a8cd dev en06 proto openfabric metric 20 onlink pref medium
fc00::83 nhid 23 via fe80::9f:84ff:fecc:ec37 dev en05 proto openfabric metric 20 onlink pref medium
fe80::/64 dev enp87s0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev en05 proto kernel metric 256 pref medium
fe80::/64 dev en06 proto kernel metric 256 pref medium
default via xxxx:xxxx:830:1::1 dev vmbr0 proto kernel metric 1024 onlink pref medium

Note: the fc00:: network is a private mesh running over thunderbolt-net, so the Proxmox node kernel is responsible for taking packets that arrive on its public IPv6 address and routing them onto the fc00:: net. That appears to work in general - pings, telnet and SSH connect, etc. - but no traffic actually flows on those sockets....
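
One thing I still need to rule out (purely a guess on my part, not something I have confirmed): a path-MTU problem between the routed public network and the thunderbolt mesh, since small packets like pings and TCP handshakes would still get through while full-size segments quietly die. Something like this from the client should show where the path MTU sits:

Code:
# show the discovered path MTU hop by hop
tracepath -6 fc00::81

# don't-fragment pings at increasing payload sizes to find where they stop
ping -6 -M do -s 1452 -c 3 fc00::81
ping -6 -M do -s 8952 -c 3 fc00::81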
 
Last edited:
OK, to get past this issue I added a second public network.

I added 3 mons for this network

Now I get this, which makes no sense, as both sides have method 2...

Code:
alex@Docker01:/etc/ceph$ sudo ceph -s
2025-04-18T17:37:16.338-0700 7f5c85bb86c0 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]

ceph auth ls shows the following for the client:
Code:
client.docker01
        key: redacted==
        caps: [mon] allow r
        caps: [osd] allow rwx pool=docker
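
From what I've read, this error often shows up when the key itself doesn't match rather than a genuine auth-method mismatch, so I want to rule out a mangled copy by comparing what the cluster holds with what the client file contains, and by running the tools explicitly as that identity (file paths are just where I put things):

Code:
# on a PVE node: print the key the cluster expects
ceph auth get client.docker01

# on the client: the keyring file should contain the exact same "key =" line
cat /etc/ceph/ceph.client.docker01.keyring

# run the client tools explicitly as that identity and keyring
sudo ceph -s --id docker01 --keyring /etc/ceph/ceph.client.docker01.keyring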

I see that someone else mentioned the same error a year or so ago and it didn't affect mounting,

so I am moving on to trying mounting.
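
My first manual attempt will be something along these lines (the mountpoint is a placeholder, and I suspect the client may also need mds caps for CephFS on top of the mon/osd caps above - that's an assumption I still have to verify):

Code:
# manual kernel mount from the client before touching fstab
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph [fc00::81]:6789,[fc00::82]:6789,[fc00::83]:6789:/ /mnt/cephfs -o name=docker01
# if mount.ceph can't find the key in /etc/ceph on its own, add
#   -o secretfile=/path/to/a/file/containing/just/the/key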
 
Last edited: