Confusing Ceph GUI Info when using multiple CephFS volumes


Apr 8, 2017
I recently upgraded to Proxmox 7 and Ceph Pacific, which brought support for multiple CephFS filesystems. My goal was to create one FS on my HDD OSDs and one FS on my SSD OSDs so I can balance workloads across the two sets of hardware. I have a "performance" and a "capacity" CRUSH rule. Previously, I had two RBD pools, one using each rule, and a single CephFS using the "capacity" rule.

After upgrading to Pacific, the "Create FS" button was still grayed out, so I used the ceph fs volume create <name> command on the CLI, which correctly created a new CephFS filesystem and the two new pools to go along with it. I then used the GUI to set the CRUSH rule for those two pools to my "performance" rule. This all seemed to work, but the UI reports that one of my MDS servers is down.
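For reference, the CLI steps were roughly the following (the FS name "ssd-fs" is a placeholder, and I set the CRUSH rule through the GUI rather than the CLI; the ceph osd pool set lines below are the CLI equivalent). In Pacific, ceph fs volume create names the new pools cephfs.<name>.meta and cephfs.<name>.data:

```shell
# Create a second CephFS; this also creates its metadata and data pools
# (cephfs.<name>.meta and cephfs.<name>.data by default in Pacific).
ceph fs volume create ssd-fs

# Point both new pools at the "performance" CRUSH rule
# (equivalent to setting the rule per-pool in the Proxmox GUI).
ceph osd pool set cephfs.ssd-fs.meta crush_rule performance
ceph osd pool set cephfs.ssd-fs.data crush_rule performance
```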

The Ceph documentation mentions that each FS needs its own active MDS, and the UI shows that one is "up:active", 3 are "up:standby", and 1 is "stopped". ceph -s shows 2/2 daemons up, 2 standby.

ceph fs dump shows 2 filesystems, each with an active MDS, plus 3 standby daemons. In the Proxmox UI, one of those active MDSes is listed as "up:active" and the other as "stopped". This leads me to believe that Proxmox is incorrectly showing the active MDS for the new filesystem as stopped.
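For anyone wanting to cross-check the GUI against Ceph's own view, these are the commands I compared (run on any node with the admin keyring; output elided here):

```shell
# Cluster-level health summary; the "mds:" line shows
# "2/2 daemons up, 2 standby" in my case.
ceph -s

# Full FSMap: lists every filesystem with its active MDS rank
# and the shared standby daemons.
ceph fs dump

# Condensed per-filesystem view of MDS ranks and their states.
ceph fs status
```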

Does the Proxmox UI not actually support multiple CephFS filesystems yet?
