I recently upgraded to Proxmox 7 and Ceph Pacific, which brought support for multiple CephFS filesystems. My goal was to create one FS on my HDD OSDs and one FS on my SSD OSDs so I can balance workloads across the two sets of hardware. I have a "performance" and a "capacity" crush rule. Previously, I had 2 RBD pools, one using each rule, and a single CephFS using the "capacity" rule.
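For context, the two rules were created roughly along these lines (assuming the OSDs carry the ssd and hdd device classes):
ceph osd crush rule create-replicated performance default host ssd
ceph osd crush rule create-replicated capacity default host hdd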
After upgrading to Pacific, the "Create FS" button was still grayed out, so I used the
ceph fs volume create <name>
command on the CLI, which correctly created a new CephFS filesystem and the two new pools to go along with it. I then used the GUI to set the crush rule for those two pools to my "performance" rule. This all seemed to work fine, but the UI is reporting that one of my MDS servers is down. The Ceph documentation mentions that each FS needs its own MDS, and the UI shows that one is "up:active", 3 are "up:standby", and 1 is "stopped".
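For reference, the crush rule change I made in the GUI should be equivalent to something like this on the CLI (the pool names here are only placeholders for the ones the volume create generated):
ceph osd pool set cephfs.performance.data crush_rule performance
ceph osd pool set cephfs.performance.meta crush_rule performance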
ceph -s
shows 2/2 daemons up, 2 standby.
ceph fs dump
shows 2 filesystems, each with an active MDS, and 3 standby daemons. In the Proxmox UI, one of these active MDSes is listed as "up:active" and the other as "stopped". This leads me to believe that Proxmox is incorrectly showing the active MDS for the new filesystem as stopped. Does the Proxmox UI not actually support multiple CephFS filesystems yet?