[SOLVED] Destroying CephFS storage/pools leaves orphaned mount points that cause multiple issues.


Aug 8, 2023
I have been learning how to configure CephFS; as part of that I was repeatedly creating storage, CephFS pools, and Ceph MDS instances, then destroying and recreating them.

I found it was really easy to hang the GUI completely if the destroy was done in the wrong sequence.

I resorted to using pveceph fs destroy ISOs-Templates --remove-pools 1 --remove-storages 1 on one FS to remove all remnants.

I found some issues after this:

  1. the /mnt/pve/ISOs-Templates mount point was not removed
  2. this caused interesting errors when I tried to recreate a CephFS with this name (I couldn't create ISOs-Templates because apparently the mount point doesn't exist)
  3. when I open a shell and cd /mnt/pve, all is OK
  4. if I do ls, everything is OK
  5. if I do ls -l, the shell hangs
  6. if I try to rm ISOs-Templates, I get an error that the target is not a directory
  7. umount -l -f doesn't do anything to help
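The ls vs ls -l difference above seems to be because plain ls only reads the directory listing, while ls -l has to stat() every entry, and a stat on a dead CephFS mount blocks forever. A way to probe a suspect path without hanging the shell (just a sketch using timeout from coreutils; the CephFS path is my mount point, substitute your own):

```shell
# probe_mount.sh -- check whether a path answers stat calls without hanging.
# `ls` alone only reads the directory; `ls -l` stats each entry, which is
# what blocks on a stale CephFS mount. `timeout` bounds the wait.
probe() {
    if timeout 3 stat "$1" >/dev/null 2>&1; then
        echo "$1: responsive"
    else
        echo "$1: stale, hung, or missing"
    fi
}

probe /tmp                       # a healthy local path, for comparison
probe /mnt/pve/ISOs-Templates    # substitute your suspect mount point
```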
I have tried stopping all MDS instances and then deleting the mount points.
I haven't tried rebooting a node or shutting down the full cluster.
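In hindsight, the order that seems to avoid the hang is roughly the following (a sketch pieced together from what I did wrong; the destroy flags are the ones I used above, but please verify the pvesm flags against your PVE version):

```shell
# 1. Disable the storage entry first, so PVE stops touching the mount
pvesm set ISOs-Templates --disable 1

# 2. Unmount the CephFS on every node while the MDS is still running
umount /mnt/pve/ISOs-Templates

# 3. Only then destroy the FS together with its pools and storage definition
pveceph fs destroy ISOs-Templates --remove-storages 1 --remove-pools 1
```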

Any ideas how to fix this?
Thanks for the confirmation. Can I reboot them one at a time, or does the whole cluster need to come down?
You can also try to unmount with umount -l -f /path/to/mount/point

root@pve1:/# ls /mnt/pve/
bar  cephfs  foo  ISOs-Templates  ISO-Template  test
root@pve1:/# umount -l -f /mnt/pve/foo
umount: /mnt/pve/foo: not mounted.

Also, if I do ls -l /mnt/pve the command hangs forever.
Well, /mnt/pve/foo is apparently not mounted, so it's not the hanging mount. Since ISOs-Templates is the storage you deleted, I guess you should have tried umount -l -f /mnt/pve/ISOs-Templates.
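To see whether a given path is actually a mount at all, rather than just a leftover directory, you can check the kernel's mount table. A leftover directory that does not appear there cannot be umounted, only removed once nothing hangs on it. For example (the /mnt/pve path is the PVE default; adjust as needed):

```shell
# List kernel-tracked mounts below /mnt/pve; entries here are real mounts,
# anything else under /mnt/pve is just a stale directory.
grep ' /mnt/pve' /proc/mounts || echo "no mounts under /mnt/pve"
```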
Thanks Fiona, I had tried that before I posted; it was why I posted.

I have since rebooted all the nodes, so I no longer have an active repro.
I note there were other posts in the forums over multiple years with the same issue.
If and when I have an active repro I will post again.

Thanks for your help.
Not wanting to necropost, but I had this same issue as well. It is not clear how to go about removing a Proxmox-created CephFS simply and easily.

There seems to be no clear way to unmount a CephFS that was added as 'storage' to Proxmox. It would be nice if there were step-by-step guidance in the GUI. I'm not opposed to command-line tools, but if it was added via the GUI it should be completely manageable via the GUI, including removing the mount point from all nodes, shutting down the MDS servers, and then removing the relevant pools.

Indeed, even after rebooting nodes that no longer have the 'cephfs' filesystem mounted, I still have 'cephfs (nodename)' visible as a storage for each node, with a question mark in front of it and no clear way to remove it.


To clear the failed storages, I just edited /etc/pve/storage.cfg in vi and removed the lines pertaining to them.
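For reference, the stanza to remove in /etc/pve/storage.cfg typically looks something like this (names here match this thread; your content types and keys may differ slightly by PVE version):

```
cephfs: ISOs-Templates
        path /mnt/pve/ISOs-Templates
        content iso,vztmpl
        fs-name ISOs-Templates
```

Since /etc/pve is the cluster-wide configuration filesystem, editing it on one node propagates the change to all nodes.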

