Same issue here.
I was trying to mount CephFS on a VM.
After ...
Code:
mkdir /etc/ceph
ssh root@<node-ip> "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
... I tried to get a keyring to authenticate client.admin against the ceph cluster with
Code:
ssh root@<node-ip> "sudo ceph fs authorize cephfs client.admin / rw" | sudo tee /etc/ceph/ceph.client.admin.keyring
That one led to the error message that this user already exists and cannot be created again, followed by the advice to remove it before creating it anew.
With ...
Code:
ssh root@<node-ip> "sudo ceph auth rm client.admin"
followed by
Code:
ssh root@<node-ip> "sudo ceph fs authorize cephfs client.admin / rw" | sudo tee /etc/ceph/ceph.client.admin.keyring
I removed the already existing client.admin, got a new client.admin and the desired keyring - but then ceph was messed up as described above.
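(Looking back, I think the existing key could simply have been exported instead of removing and recreating the user, something along these lines:)
Code:
ssh root@<node-ip> "sudo ceph auth get client.admin" | sudo tee /etc/ceph/ceph.client.admin.keyring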
Questions:
- How can I retrieve the client.admin keyring to be used to authenticate a cephfs mount?
- What exactly needs to be done with the newly created admin keyring to make the cluster work again? (A possible approach is sketched below.)
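Regarding the second question: my guess by now is that "ceph fs authorize" recreated client.admin with CephFS-only caps and a new key, so everything that relies on the full admin key breaks. I have not dared to test it again, but a repair might look roughly like this (assuming a standard hyperconverged Proxmox setup where the monitor id equals the hostname and the admin keyring lives under /etc/pve/priv/ - please double-check both on your system):
Code:
# give client.admin its default full caps back; authenticate with the
# monitor's own key, since the crippled admin key is probably not
# allowed to change auth settings anymore
ceph -n mon. -k /var/lib/ceph/mon/ceph-$(hostname)/keyring \
  auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
# export the repaired keyring to where Proxmox expects it
ceph -n mon. -k /var/lib/ceph/mon/ceph-$(hostname)/keyring \
  auth get client.admin | tee /etc/pve/priv/ceph.client.admin.keyring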
(As a possible solution if somebody falls into the same trap again ... Oo )
Bottom line:
In the end I had to remove the whole ceph installation from the cluster and reinstall it from scratch by running something like this on every node ...
--- Attention ---
---
--- This is probably only to be used as a last resort, short of reinstalling the whole cluster from scratch !!!
--- Don't experiment with such things on production systems if you don't have proper backups of all relevant data !!!
---
Code:
# remove leftover systemd units for the Ceph services
rm -rf /etc/systemd/system/ceph*
# kill any Ceph daemons that are still running
killall -9 ceph-mon ceph-mgr ceph-mds
# wipe the local monitor / manager / MDS data
rm -rf /var/lib/ceph/mon/ /var/lib/ceph/mgr/ /var/lib/ceph/mds/
# remove the Proxmox Ceph configuration from the node
pveceph purge
# remove the Ceph packages and the old init script
apt purge ceph-mon ceph-osd ceph-mgr ceph-mds
rm /etc/init.d/ceph
# reinstall Ceph via Proxmox
pveceph install
... to solve that carelessly self-induced problem, simply because I did not have the knowledge to fix such a mess. Nor do I have it today.
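(For completeness, and reconstructed from memory, so treat it only as a rough sketch: after the reinstall the Ceph services have to be recreated on the nodes as well, roughly like this - network, node roles and device names are placeholders for your own setup.)
Code:
pveceph init --network <ceph-network-cidr>   # once, to recreate the ceph.conf
pveceph mon create                           # on each monitor node
pveceph mgr create                           # on each manager node
pveceph osd create /dev/<disk>               # for every OSD disk
pveceph mds create                           # on each metadata server node
pveceph fs create                            # recreate the CephFS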
Conclusions:
Well ... Just don't tamper with any kind of "admin" user at all - nowhere, never, ever.
Especially if you don't fully understand the consequences ...
Create your own users instead.
e.g.
Code:
ssh root@<node-ip> "sudo ceph fs authorize cephfs client.stephan / rw" | sudo tee /etc/ceph/ceph.client.stephan.keyring
You can use that one with ceph-fuse or in fstab as well.
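(If you want to double-check what the new user is allowed to do, its caps can be listed like this:)
Code:
ssh root@<node-ip> "sudo ceph auth get client.stephan"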
- Manual ceph-fuse Mount
Code:
ceph-fuse --id stephan -k /etc/ceph/ceph.client.stephan.keyring /mnt/cephfs
or
Code:
ceph-fuse --id stephan -k /etc/ceph/ceph.client.stephan.keyring --client_mds_namespace your-cephfs /mnt/your-cephfs
if you have more than one filesystem set up on the cluster.
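(On newer Ceph versions the filesystem can apparently also be selected with the client_fs option on the command line, matching the fstab syntax further below:)
Code:
ceph-fuse --id stephan -k /etc/ceph/ceph.client.stephan.keyring --client_fs your-cephfs /mnt/your-cephfs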
- fstab ceph-fuse Mount
Code:
client_mountpoint=/,id=stephan /mnt/cephfs fuse.ceph defaults,_netdev,noatime 0 0
Code:
client_mountpoint=/,client_fs=cephfs,id=stephan /mnt/cephfs fuse.ceph defaults,_netdev,noatime 0 0
- Manual kernel Mount
Code:
mount -t ceph <mon-ip>[,<mon-ip>]:/ /mnt/cephfs -o name=stephan
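(The kernel mount helper should pick up the key from /etc/ceph/ceph.client.stephan.keyring automatically; if it does not, the key can - as far as I know - also be passed explicitly via a secret file that contains only the bare key. The file name stephan.secret is just an example.)
Code:
ssh root@<node-ip> "sudo ceph auth print-key client.stephan" | sudo tee /etc/ceph/stephan.secret
mount -t ceph <mon-ip>[,<mon-ip>]:/ /mnt/cephfs -o name=stephan,secretfile=/etc/ceph/stephan.secret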
- fstab kernel Mount
Code:
<mon-ip>[,<mon-ip>]:/ /mnt/cephfs ceph name=stephan,_netdev,noatime 0 0
So far I have not found out how to kernel-mount a non-default cephfs via fstab, but I guess there is an option to be passed for that as well.
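(I have not tried it myself, but as far as I know the kernel client accepts an mds_namespace option for that, so presumably something like the following would work:)
Code:
<mon-ip>[,<mon-ip>]:/ /mnt/your-cephfs ceph name=stephan,mds_namespace=your-cephfs,_netdev,noatime 0 0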
Make sure that you use the _netdev option in any case, so the filesystem is only mounted after networking is up.