nodes eventually lose access to CephFS and have to unmount

alyarb

Well-Known Member
Feb 11, 2020
15-node cluster, all just upgraded to 6.4-13. We did not have this problem with previous 6.x PVE releases.

I can always "fix it" by running umount /mnt/pve/CephFS, but within an hour or so at least 4 or 5 of the nodes will lose it again.
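For reference, the full dance on an affected node looks roughly like this (the lazy -l fallback is just what I reach for when the plain umount hangs on the dead mount point; pvestatd remounts the storage on its own shortly afterwards):

umount /mnt/pve/CephFS        # plain unmount; PVE remounts the storage within a minute or so
umount -l /mnt/pve/CephFS     # lazy unmount, only needed if the plain one hangs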

On the PVE GUI the error is "unable to activate storage 'CephFS' - directory /mnt/pve/CephFS does not exist or is unreachable (500)"

In bash, the error is "-bash: cd: /mnt/pve/CephFS: Permission denied"
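In case it's useful, this is what I check on a node when it happens (standard tools only; I'm assuming a stale kernel-client mount rather than an actual permissions problem):

findmnt /mnt/pve/CephFS       # is the mount still listed, and with which options?
dmesg | grep -i ceph          # kernel ceph client errors (lost MDS session, socket errors)
ceph -s                       # overall cluster and MDS health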

thanks for reading
 
Update:

Rebooted all nodes on Friday, and by Monday the CephFS mounts had been lost again.
 
Can you post the journal from a period when this happens (i.e., starting from a time when it's still working until a time when it's not)?
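Something along these lines should capture the relevant window (adjust --since/--until to bracket the failure; pvestatd is the daemon that activates the storages, so its unit plus the kernel messages are usually the interesting parts):

journalctl --since yesterday > journal.txt       # full journal for the window
journalctl -u pvestatd --since yesterday         # just the storage daemon
journalctl -k --since yesterday | grep -i ceph   # kernel ceph client messages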
 
