No, I haven't figured it out yet, but I have created a new thread and got someone's attention there: https://forum.proxmox.com/threads/ceph-tooling-fault-when-creating-mds.59710/
Can you elaborate on what the replication setting was and how you changed it to fix it?
After wrangling with pveceph with no luck, I removed the pool with ceph osd pool delete without any issues.
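Roughly what that boils down to (assuming the pool is also named 'transfer'; the monitors may first have to be told to allow pool deletion):

ceph tell mon.* injectargs '--mon-allow-pool-delete=true'                # allow pool deletion on the mons, if not already configured
ceph osd pool delete transfer transfer --yes-i-really-really-mean-it    # pool name has to be given twice plus the confirmation flag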
This is what I got after a while when using the web interface:
checking storage 'transfer' for RBD images..
failed to remove storage 'transfer': delete storage failed: error with cfs lock...
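(As far as I can tell, the CLI equivalent of that removal would be pvesm remove transfer, assuming 'transfer' is the storage ID; that just drops the entry from the storage configuration.)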
Good evening,
I posted in another thread (https://forum.proxmox.com/threads/proxmox-6-ceph-mds-stuck-on-creating.57524/#post-268549) that was created on the same topic and just hopped onto it, but that thread seems to be dead, so I am trying my luck here to see whether this is a general problem...
I didn't know that. Can you please point me in the right direction?
Also, it really doesn't change the fact that Proxmox seems to ship with a built-in fault (at least in the default distribution).
It seems that Debian 10 (buster) is on this version of Ceph:
buster (stable) (admin): distributed storage and file system
12.2.11+dfsg1-2.1: all
but IIRC Proxmox is on 14.2.2.
I would like to use the newer version because it introduced some features that I need.
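To double-check which version is actually installed on a node, something like this should do:

ceph --version          # version of the installed Ceph binaries
apt-cache policy ceph   # which version the configured repositories provide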
There seems to be a problem after all.
I have a 3-node Proxmox cluster and installed Ceph via the web interface on all nodes.
Creating an MDS for CephFS fails on all nodes equally, with the same error.
Any hints on what I could do to make this work?
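For what it's worth, as far as I can tell the CLI equivalent of the web interface button on PVE 6 is roughly:

pveceph mds create                       # run on the node that should host the MDS
systemctl status ceph-mds@$(hostname)    # check what the daemon is doing afterwards
journalctl -u ceph-mds@$(hostname) -b    # and its log, in case it crashed

Maybe that helps narrow it down.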
Edit: I destroyed the MDS and the CephFS via the CLI as...
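(For anyone reading along, the usual sequence for that is something along these lines; 'cephfs' is the default filesystem name and <nodename> is a placeholder for the node the MDS was created on.)

ceph fs fail cephfs                         # take the filesystem down / fail its MDS ranks
ceph fs rm cephfs --yes-i-really-mean-it    # remove the filesystem definition
pveceph mds destroy <nodename>              # remove the MDS daemon on that node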