Hi,
I have a small three-node homelab PVE cluster running. Each PVE node is also a Ceph node, and each node houses one NVMe OSD and one HDD OSD. The three NVMe OSDs form one pool, and the three HDD OSDs form another.
The idea was to have HA for my VMs, i.e. if one node fails, its VMs can continue running on one of the other two nodes.
But now I am having doubts whether that will actually work, because it seems that if one Ceph node/OSD fails, the remaining two Ceph nodes/OSDs are not enough to keep the lights on.
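As I understand it, whether the pool keeps serving I/O after a failure comes down to the pool's replication settings (size and min_size; the Proxmox/Ceph defaults for a replicated pool are size=3, min_size=2). Here is a minimal sketch of my understanding of that rule (`pool_io_continues` is just an illustrative helper, not a Ceph API):

```python
# Sketch of the size/min_size availability rule, assuming a replicated pool
# with one OSD of the pool per node (my setup) and Ceph defaults.

def pool_io_continues(size: int, min_size: int, failed_nodes: int) -> bool:
    """I/O continues as long as the surviving replica count >= min_size."""
    surviving = size - failed_nodes
    return surviving >= min_size

# With size=3/min_size=2, one failed node leaves 2 replicas, which still
# meets min_size, so I/O continues (though the pool runs degraded and has
# no third node to re-replicate to).
print(pool_io_continues(3, 2, 1))  # True  -> one node down, lights stay on
print(pool_io_continues(3, 2, 2))  # False -> two nodes down, I/O blocks
```

If that sketch is right, losing a single node would still leave the pools usable, just degraded.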
Is my impression wrong, or would I be better off getting rid of Ceph again? Or is there another feasible course of action for me (at the moment I don't want to add more drives)?
Thanks!