Hello everyone,
There is a fully functional CephFS running on a 3-node cluster.
It was set up very simply; here is the MDS-related part of the conf:
Code:
[mds]
keyring = /var/lib/ceph/mds/54da8900-a9db-4a57-923c-a62dbec8c82a/keyring
mds data = /var/lib/ceph/mds/54da8900-a9db-4a57-923c-a62dbec8c82a
[mds.VMHost2]
host = VMHost2
[mds.VMHost4]
host = VMHost4
[mds.VMHost3]
host = VMHost3
And this is what ceph mds stat shows:
Code:
cephfs-1/1/1 up {0=54da8900-a9db-4a57-923c-a62dbec8c82a=up:active}
Those of you experienced with this kind of thing can probably guess my problem: when one of my nodes goes down, so does the CephFS. (I also notice the mds stat output above only lists one daemon as up:active, with no standbys, even though three MDS daemons are configured.)
I am not necessarily interested in making all the MDS daemons active, but I would at least like failover.
However, the MDS is not failing over.
What settings do I need to add to make this work?
I have tried to read up on CephFS regarding this issue, but I cannot figure it out =(
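
In case it helps show what I mean, this is the kind of standby configuration I have seen mentioned in the docs (mds standby for rank / mds standby replay); I am not sure these are the right options for my version, so please correct me if this is wrong:

Code:
[mds.VMHost3]
host = VMHost3
# guessed from the docs, not verified:
# follow rank 0 and keep a warm replay standby
mds standby for rank = 0
mds standby replay = true

Would adding something like this to the other MDS sections give me the failover I am after?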