Looking for storage guidance with a Hyper-Converged Ceph Cluster

benoitc

Member
Dec 21, 2019
I am finally testing Ceph storage on one platform. I have 2x 256 GB M.2 NVMe disks and 2x 980 GB SSD drives. I am wondering if it's better to:
  • put the system on one NVMe disk and the Ceph log (DB/WAL) on the other one, with Ceph using the 2 SSD disks as OSD storage, or
  • set up the 2 M.2 disks as a ZFS mirror for the system.
I understand that in the first case, if the boot disk crashes, the whole node would be unavailable, but in the meantime I still have the other nodes in the cluster, which would give me time to replace that disk. I wonder what others do (besides the fact that I have really limited storage for Ceph...).
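
For that first option, I imagine the OSD creation would look roughly like this (device names are just guesses for my hardware, and I haven't tested it, so treat it as a sketch; if I read the docs right, the same -db_dev can be reused for both OSDs):

  # Proxmox installed on /dev/nvme0n1, /dev/nvme1n1 kept for the Ceph DB/WAL
  pveceph osd create /dev/sda -db_dev /dev/nvme1n1
  pveceph osd create /dev/sdb -db_dev /dev/nvme1n1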

Any feedback is welcome