Good evening,
I've been trying to deploy MDS and CephFS by piecing together a couple of very scarce threads from here and there, but no luck so far. The ceph-deploy tool does not work as expected, and I couldn't start the MDS even after manually adding an [mds] section etc. to ceph.conf.
Is MDS and CephFS management in Proxmox a feature we should expect soon?
Has anyone successfully deployed CephFS on Proxmox who can give me/us some advice?
* I have a small 3-node cluster with 5x 1 TB OSDs each, and my ultimate plan is to deploy CephFS on an erasure-coded pool that acts like RAID5 across the three nodes, able to sustain the failure of a whole node.
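For reference, here is a minimal sketch of what I'd expect the EC setup to look like once the MDS problem is solved. Profile, pool, and filesystem names are my own placeholders, and this assumes a Luminous-or-later cluster on BlueStore, since CephFS data on erasure-coded pools needs allow_ec_overwrites:

```shell
# Hypothetical EC profile: 2 data + 1 coding chunk, one chunk per host,
# so the pool survives the loss of a whole node (RAID5-like).
ceph osd erasure-code-profile set ec-k2m1 k=2 m=1 crush-failure-domain=host

# Data pool on that EC profile; the PG count here is a placeholder.
ceph osd pool create cephfs_data 128 128 erasure ec-k2m1

# CephFS on EC requires partial-overwrite support (BlueStore only).
ceph osd pool set cephfs_data allow_ec_overwrites true

# CephFS metadata must live on a replicated pool.
ceph osd pool create cephfs_metadata 32 32 replicated

# Create the filesystem from the two pools.
ceph fs new cephfs cephfs_metadata cephfs_data
```

I'd welcome corrections if the Proxmox tooling expects this to be done differently.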
** Currently I have a replicated pool with two copies, which gives me about 7 TB of storage, but the data I need to fit in that pool is 9 TB; that's why I need erasure coding.
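The capacity numbers above come out of simple arithmetic; a back-of-the-envelope sketch, assuming 3 nodes x 5 x 1 TB of raw space and ignoring Ceph's full-ratio headroom:

```python
# Raw capacity of the cluster described above: 3 nodes x 5 OSDs x 1 TB.
raw_tb = 3 * 5 * 1.0

# Replicated pool, size=2: every object is stored twice,
# so usable space is half of raw (~7.5 TB, close to the ~7 TB seen).
replicated_usable = raw_tb / 2

# Erasure-coded pool, k=2 m=1 (RAID5-like): 2 data + 1 coding chunk,
# so usable space is k/(k+m) of raw (~10 TB, enough for the 9 TB of data).
k, m = 2, 1
ec_usable = raw_tb * k / (k + m)

print(replicated_usable, ec_usable)
```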
*** Another question: say all your nodes get fried, but you magically manage to save your OSDs.
Is there a way to add/import those OSDs into another fresh/new (or existing) cluster and keep your data?
Kind regards...