Hi there,
After the last package update, on a cluster running version 6.1-7, VMs failed to come back up due to missing CephFS dependencies. Ceph reports no problems and the health status is green.
However, when I try to re-enable the only CephFS storage, I get:

mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details. (500)
And when I try to create a new one, I get:
create storage failed: error with cfs lock 'file-storage_cfg': mount error: See "systemctl status mnt-pve-test.mount" and "journalctl -xe" for details. (500)
I have not upgraded Ceph to Nautilus yet; I'm still on Luminous from the 5.x install, and I'm not keen on attempting the upgrade until this issue is resolved.
Any ideas on how to get things back to normal?
The only other recent change apart from the upgrade was plugging in an external USB drive with a ZFS pool to use for backups. I have since removed it and eliminated it from any config, just in case.