Hello everyone, I am in an awkward situation with the storage configuration of one of the clusters I administer.
I used to have an NFS storage with files shared among all the PVEs in the cluster. When I decided to migrate to Ceph, I installed it, configured everything, and created a CephFS plus a pool to store all the data.
After that, I copied every file from the "old" NFS to the new CephFS, planning to replace that storage and reuse the same name for it. Since we have automations that run against our clusters, keeping standard storage names is essential. Now I am in this situation:
The CephFS was created with the "wrong" name, and as far as I know, creating a second CephFS is experimental and not recommended.
I am looking for a workaround that fixes this without losing data. Has anyone tried creating a "Directory" storage type that targets a CephFS mount point? Does that make sense, or could it cause problems?
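For reference, here is roughly what I had in mind in `/etc/pve/storage.cfg` (the storage name and mount path are placeholders for my setup; I believe `is_mountpoint` should prevent PVE from writing into the directory if the CephFS is not actually mounted, and `shared` tells the cluster all nodes see the same data):

```
dir: mystandardname
	path /mnt/pve/mycephfs
	content images,iso,backup,vztmpl
	is_mountpoint yes
	shared 1
```

I am not sure whether a directory storage layered on CephFS behaves differently from a native CephFS storage entry (e.g. for snapshots or migration), so any experience with this setup would be appreciated.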
Thank you.