CephFS: add MDS

a.davanzo

Member
Nov 18, 2020
Hello,
I want to use PVE with Ceph to serve CephFS.
Currently I have an old Ceph cluster, built on virtual machines, that I want to replace with PVE for managing Ceph.

In the previous cluster I had 3 MDS, 2 active and 1 standby, and there are a lot of client connections to this CephFS.

I can't find a way to add more than one active MDS to the filesystem with PVE; I've read that by default there is only one.
Is it possible to increase this?
How?

Thanks
 
The official documentation at https://docs.ceph.com/en/latest/architecture/#arch-cephfs seems to suggest that there is one active MDS and additional ones on standby:

The extra ceph-mds instances can be standby, ready to take over the duties of any failed ceph-mds that was active.

And that's what I see here. I have only one CephFS; one MDS is "active" and one is "standby".
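A quick way to check what is currently active (just a sketch; the output will of course differ on your cluster):

ceph fs status   # lists the active ranks, which daemon holds each, and the standbys
ceph mds stat    # one-line summary, e.g. cephfs:1 {0=nodeA=up:active} 1 up:standby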

In my understanding this is per CephFS. If you have more than one FS there may be multiple active MDS.

Disclaimer: I am not a Ceph expert and may happily be proven wrong.
 
  • Scalability: Multiple ceph-mds instances can be active, and they will split the directory tree into subtrees (and shards of a single busy directory), effectively balancing the load amongst all active servers.

Combinations of standby and active etc are possible, for example running 3 active ceph-mds instances for scaling, and one standby instance for high availability.
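To have anything to promote to active (or keep on standby), you first need additional MDS daemons in the cluster. A rough sketch of the PVE side, assuming a recent PVE release; the same can be done in the GUI under the node's Ceph -> CephFS panel:

# run on each PVE node that should host an MDS daemon
pveceph mds create

Any daemon beyond the number of active ranks simply stays in standby.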
 
In my understanding this is per CephFS. If you have more than one FS there may be multiple active MDS.
That's exactly right. You can add as many MDSs to the cluster as you like, but their function is dictated by your policies.

To tell the filesystem what you want to do, set the max_mds variable per FS, like so:

ceph fs set [ceph fs name] max_mds [n]

where [ceph fs name] is the FS and [n] is the number. n should normally not be more than 2, and unless your FS and client load are both of sufficient size, you'd be better off just leaving it at one active and one standby.
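As a concrete sketch, assuming the default filesystem name "cephfs" that PVE creates (adjust to your own FS name):

ceph fs set cephfs max_mds 2   # allow two active ranks
ceph fs status                 # should now show ranks 0 and 1 active, plus any standbys

To scale back, set max_mds to 1 again; on Nautilus and newer Ceph releases the extra rank is stopped automatically.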
 
Thanks a lot.
It worked. When I first tried I got an error, I don't know why, but now it works.
Thanks a lot.
 
