This fixed it for me. I guess having a mix of 240GB, 512GB, and 1TB SSDs, 1TB, 2TB, and 4TB 2.5-inch HDDs, and 8TB HDDs scattered unevenly across my 3 nodes made the update angry. Thank you so much @Jerek, I just wish I had found this and tried it 15 hours ago.
I was just working on that, but it keeps hanging on the partition step, so I'm pulling the drives and formatting them with my MacBook Pro =/
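(In case it saves someone the drive-pulling: if old partition tables are what's hanging things up, clearing them on the node itself usually works. This is just a sketch, and /dev/sdX is a placeholder, so triple-check the device name first.)

    wipefs -a /dev/sdX                       # clear old filesystem/RAID signatures
    sgdisk --zap-all /dev/sdX                # wipe GPT and MBR partition tables
    ceph-volume lvm zap /dev/sdX --destroy   # Luminous+: also tears down leftover Ceph LVM metadata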
This version?
https://cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/current/amd64/iso-cd/
Edit: Derp, just saw you said Jessie.
Right below that it mentions adding the key, so I dropped a key in the [mds] section.
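For anyone following along, the manual MDS key setup is usually something along these lines (a sketch only; purple1 is the MDS name used later in the thread, and the exact caps vary a bit between releases, so check the docs for your version):

    mkdir -p /var/lib/ceph/mds/ceph-purple1
    ceph auth get-or-create mds.purple1 mon 'allow profile mds' mds 'allow *' osd 'allow rwx' -o /var/lib/ceph/mds/ceph-purple1/keyring
    chown -R ceph:ceph /var/lib/ceph/mds/ceph-purple1

and then something like this in ceph.conf so the daemon knows where it lives:

    [mds.purple1]
    host = purple1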
I am going to disable cephx and see if that helps. I went ahead and got some 10-gig gear to connect everything properly, so I'll be rebuilding the whole thing from scratch for the 10th-ish time lol
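If it helps anyone trying the same thing, disabling cephx is (as far as I know) done in the [global] section of ceph.conf on every node, and then every daemon has to be restarted for it to take effect:

    [global]
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none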
What does your ceph.conf look like? Sorry about blowing you up with questions, I appreciate all the help.
EDIT:
No idea what any of this means, but I ran this: "ceph-mds -i purple1 -d" and it's now showing it as running
2017-08-22 17:24:11.820384 7faa7b8ff6c0 0 ceph version 12.1.4...
I started with just doing that and have been messing with it since.
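Running it in the foreground with -d is handy for watching the log, but assuming the stock packaging, the systemd unit should keep it running across reboots:

    systemctl enable ceph-mds@purple1
    systemctl start ceph-mds@purple1
    systemctl status ceph-mds@purple1    # should report active (running)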
The previous timeout 110 error was because the Ceph versions were different. I have fixed that: I fired up an Ubuntu VM and squared away the Ceph versions.
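A quick way to double-check everything really is on the same release now (assuming a Luminous-era cluster like the 12.1.4 shown above):

    ceph versions    # run on a mon node: daemon versions across the whole cluster (Luminous+)
    ceph --version   # run on each client/VM: the locally installed version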
I feel like I have screwed up cephx in the manual MDS setup somehow and I...
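One sanity check before tearing it all down again: make sure the key the MDS daemon is reading matches what the monitors have for it (paths assume the default layout and the purple1 name from the earlier edit):

    ceph auth get mds.purple1                    # the key the cluster expects
    cat /var/lib/ceph/mds/ceph-purple1/keyring   # the key the daemon is actually using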
I didn't check until after the CephFS creation =/. I am now trying to get FUSE to work on a VM. So far, from what I can tell, the services and everything are running. Hopefully tonight I can mess with it more and get something to mount.
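The FUSE mount itself is roughly this (the mon address and mount point are placeholders, and the VM needs /etc/ceph/ceph.conf plus a client keyring in place first):

    mkdir -p /mnt/cephfs
    ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs    # 10.0.0.1 = placeholder monitor address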
EDIT 1:
FUSE is getting a connection timeout...
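If anyone hits the same timeout, the usual suspects are the client not reaching the monitors (firewall/routing) or a missing ceph.conf/keyring on the client. Quick checks from the VM, with a placeholder mon IP:

    ping -c 3 10.0.0.1      # placeholder mon IP: basic reachability
    nc -zv 10.0.0.1 6789    # is the monitor port open? (OSD/MDS traffic also needs 6800-7300)
    ls /etc/ceph/           # ceph.conf and a client keyring should both be present on the VM
    ceph -s                 # if this times out too, it's the mon connection, not FUSE itself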
OMG I have been working on this for like 2 weeks. I am trying this now. Thank you so much!
I wish it was easier to get CephFS working
edit:
ceph -s is still showing mds 0/0/1.
I'll try to see if I can get it to mount.
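As far as I understand it, mds 0/0/1 in ceph -s means the filesystem wants one active MDS but none has come up yet, so any mount attempt will just hang until that changes. These should show whether the daemon has actually registered (purple1 as before):

    ceph mds stat                       # should eventually show something like cephfs-1/1/1 up {0=purple1=up:active}
    ceph fs status                      # per-filesystem MDS state (Luminous+)
    systemctl status ceph-mds@purple1   # is the daemon even running on the MDS node?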