Configuring multiple CephFS in a Proxmox VE cluster

emptness

Aug 19, 2022
Greetings!
Please help me figure it out.
I have 2 CephFS configured, I plan to add another one. I see in the Proxmox panel that each CephFS is assigned to a specific MDS daemon.
Do I understand correctly that each CephFS needs its own separate MDS?
If 3 servers fail, will the one remaining MDS daemon keep two (or three) CephFS available?
I read about Multiple Active MDS, but my understanding is that it is meant for load balancing, not for high availability. Am I right?
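That reading matches the Ceph docs: extra active ranks (`max_mds > 1`) scale metadata load, while high availability comes from standby daemons that take over a failed rank. A rough sketch of the relevant commands (the filesystem name `cephfs_hdd` is a placeholder):

```shell
# High availability: keep one active rank and rely on standbys.
# Any running MDS daemon that holds no rank acts as a standby and
# can take over when an active MDS fails.
ceph fs set cephfs_hdd max_mds 1               # one active rank (the default)
ceph fs set cephfs_hdd standby_count_wanted 1  # warn when no standby is available

# Load balancing / scaling: multiple ACTIVE ranks for one filesystem.
# This adds no redundancy by itself -- each active rank still needs
# a standby to fail over to.
ceph fs set cephfs_hdd max_mds 2
```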

 
Why would you want to deploy 5 CephFS in such a small cluster?
This was for an example.
In my actual situation, there are three pools: HDD, SSD, and NVMe. CephFS for HDD and SSD are already deployed; I need to deploy a CephFS on NVMe.
But if I do this, then if 2 servers out of 4 fail, one CephFS will become unavailable! I'm trying to figure out whether there is a way to avoid this, so that adding another CephFS does not reduce the availability of services.
Can I use a configuration with 2 MDS on each server to avoid this?
 
You do not need a new CephFS for different data pools. Look up file and directory layouts in the Ceph documentation.
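For example, instead of a third filesystem you can attach the NVMe pool as an additional data pool of an existing CephFS and point a directory at it with a layout attribute. A sketch (the filesystem, pool, and mount-point names are assumptions):

```shell
# Attach the NVMe pool as an extra data pool of the existing filesystem
ceph fs add_data_pool cephfs_ssd nvme_pool

# On a client with the filesystem mounted, direct files created under
# /mnt/cephfs/fast to the NVMe pool via the directory layout
setfattr -n ceph.dir.layout.pool -v nvme_pool /mnt/cephfs/fast

# Verify the layout
getfattr -n ceph.dir.layout /mnt/cephfs/fast
```

Note that layouts only affect files created after the attribute is set; existing files stay in their original pool.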

For what purpose do you need these CephFS? Proxmox only stores ISO images and templates on a shared filesystem.

VM images and containers should use RBD.
 
The fact is that we use CephFS in Proxmox to provide shared storage to the VMs and to external clients (other servers), not to host VM disks or backups on it.
Do you have any information on this: can I run a configuration with two MDS daemons on the same server that serve different CephFS?
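For what it's worth, MDS daemons are not statically tied to a filesystem: any standby can pick up a rank in any CephFS unless you pin it. A rough sketch of running a second MDS on one node, assuming pveceph's `--name` option and Ceph's `mds_join_fs` setting (daemon and filesystem names are placeholders):

```shell
# Create a second MDS daemon on this node with a distinct name
# (by default, pveceph names the MDS after the node)
pveceph mds create --name node1-nvme

# Optionally pin the daemon to one filesystem; without this, Ceph
# assigns any free standby to whichever filesystem needs a rank
ceph config set mds.node1-nvme mds_join_fs cephfs_nvme
```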