Confusing Ceph GUI Info when using multiple CephFS volumes

ikogan

I recently upgraded to Proxmox 7 and Ceph Pacific, which brought support for multiple CephFS filesystems. My goal was to create one FS on my HDD OSDs and one on my SSD OSDs so I can balance workloads across the two sets of hardware. I have two crush rules, "performance" and "capacity". Previously, I had two RBD pools, one using each rule, and a single CephFS using the "capacity" rule.
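
For context, the two rules are just device-class based replicated rules, created along these lines (the rule names are mine; the host failure domain is an assumption):

Code:
# rule that places data only on SSD-backed OSDs
ceph osd crush rule create-replicated performance default host ssd
# rule that places data only on HDD-backed OSDs
ceph osd crush rule create-replicated capacity default host hdd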

After upgrading to Pacific, the "Create FS" button was still grayed out, so I used ceph fs volume create <name> on the CLI, which correctly created a new CephFS filesystem and the two new pools to go along with it. I then used the GUI to set the crush rule for those two pools to my "performance" rule. This all seemed to work, but the UI is reporting that one of my MDS servers is down.
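
For anyone following along, the CLI equivalent of what I did is roughly this (the FS name is just an example, and the generated pool names may differ on your system; check ceph osd pool ls):

Code:
# creates the filesystem, its data/metadata pools, and starts an MDS for it
ceph fs volume create cephfs-ssd
# move both generated pools onto the "performance" rule (I did this part in the GUI)
ceph osd pool set cephfs.cephfs-ssd.data crush_rule performance
ceph osd pool set cephfs.cephfs-ssd.meta crush_rule performance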

The Ceph documentation mentions that each FS needs its own MDS, and the UI shows that one is "up:active", 3 are "up:standby", and 1 is "stopped". ceph -s shows 2/2 daemons up, 2 standby.
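
For reference, the counts I'm comparing come from the standard status views, nothing exotic:

Code:
# overall cluster status, including the MDS summary line
ceph -s
# compact MDS daemon summary across all filesystems
ceph mds stat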

ceph fs dump shows 2 filesystems, each with an active MDS, plus 3 standby daemons. One of those active MDSes is the one the UI lists as "up:active", and the other is the one it lists as "stopped". This leads me to believe that Proxmox is incorrectly showing the active MDS for the new filesystem as stopped.
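
The per-filesystem picture comes from the following; ceph fs status with a filesystem name gives the same information in a more readable table:

Code:
# full MDS map: every filesystem with its active MDS plus the standby pool
ceph fs dump
# condensed view, optionally limited to one filesystem
ceph fs status
ceph fs status <fsname>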

Does the Proxmox UI not actually support multiple CephFS filesystems yet?
 
Does anybody know if there are any updates on this? When can we expect Proxmox to support multiple CephFS filesystems in the GUI?

Thanks
 
This is already implemented and has been available since November of last year (more specifically, since pve-manager 7.0-15).
It seems we simply missed updating the bug report; I'll go ahead and do that.
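
If you want to verify the version on a node, something like this is enough:

Code:
# multi-CephFS support in the GUI needs pve-manager >= 7.0-15
pveversion -v | grep pve-manager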
 
@dcsapak Was the bug report updated?

Is the following username format already supported by the system? (I'm running 7.4 and 8.x here.)

Code:
cephfs_mc@<FSID>.backups
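
For context, the way I would normally express this in /etc/pve/storage.cfg keeps the cephx user and the filesystem name in separate options, roughly like the sketch below (all names are placeholders); my question is whether the combined user@FSID.fs form above is accepted as well:

Code:
cephfs: cephfs-backups
        path /mnt/pve/cephfs-backups
        content backup
        fs-name backups
        username cephfs_mc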