nodes eventually lose access to CephFS and have to unmount

alyarb

Renowned Member
Feb 11, 2020
15-node cluster, just upgraded all nodes to 6.4-13. Did not have this problem on any previous 6.x PVE release.

I can always "fix it" by running umount /mnt/pve/CephFS, but within an hour or so at least 4 or 5 of the nodes will lose it again.
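
For anyone else hitting this in the meantime: the sequence below is all the "fix" amounts to. As far as I can tell, pvestatd remounts the storage on its next activation cycle, so the umount alone is enough (findmnt is just there to confirm the kernel still holds the stale mount):

Code:
findmnt /mnt/pve/CephFS    # the stale mount still shows up here
umount /mnt/pve/CephFS     # drop it; pvestatd remounts it shortly after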

On the PVE GUI the error is "unable to activate storage 'CephFS' - directory /mnt/pve/CephFS does not exist or is unreachable (500)"

In bash, the error is "-bash: cd: /mnt/pve/CephFS: Permission denied"
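
If it helps, here are a couple of quick checks that should expose the stale state while it's happening (standard tools, nothing PVE-specific):

Code:
stat /mnt/pve/CephFS           # errors out while the mount is stale
dmesg | grep -i ceph | tail    # any libceph / MDS session messages around the failure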

Thanks for reading.
 
Update:

Rebooted all nodes on Friday, and by Monday the CephFS mounts had been lost again.
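
To get an exact timestamp for when each node loses the mount (instead of noticing hours later), I'm considering running a crude per-node watcher like this; the interval and log path are arbitrary choices of mine:

Code:
# log a timestamp whenever the CephFS mount stops responding
while sleep 60; do
    if ! timeout 5 ls /mnt/pve/CephFS >/dev/null 2>&1; then
        echo "$(date -Is) CephFS unreachable on $(hostname)" >> /var/log/cephfs-watch.log
    fi
done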
 
Can you post the journal from a period when this happens (i.e. from a time when it's still working until a time when it's not)?
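
Something like the following should capture a useful window (example timestamps; pick ones that bracket a failure):

Code:
journalctl --since "2021-09-17 08:00" --until "2021-09-20 09:00" > /tmp/cephfs-journal.txt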