I want to set up a new cluster and have the following constraints again. I will have 3 machines with chassis that can accept either 2x 2.5" disks + PCIe or 4x 2.5" disks; all machines will have 2x 10GbE + 4x 1GbE connections. The boxes were chosen mainly to minimise noise and footprint in the office. The system will be used for DB storage.
In the previous (and currently running) config I had set up a Ceph cluster on 2 disks while the OS runs on 2x NVMe PCIe disks, but I am thinking Ceph is not designed for this; at least the cost per GB is maybe too high.
So I am thinking of using only local storage for the VMs and, for HA, 2 NAS (1 in failover) over iSCSI with 2x 10GbE each. The question I am asking myself is whether I should buy 4 SSDs and use them for local storage with ZFS, or whether I should separate the storage and keep the default system on an NVMe ZFS RAID1 pool with the SSDs in another zpool.
In the latter case, would you still create a RAID?
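For the separate-pool option, here is a minimal sketch of what the two-zpool layout could look like. The device names (`nvme-DISK0`, `ata-SSD0`, etc.) are placeholders, not your actual disks, and the choice of two mirrored vdevs for the SSDs is just one possible layout:

```shell
# Hypothetical sketch; substitute your real device IDs
# (list them with: ls -l /dev/disk/by-id/).

# OS / default pool: ZFS mirror (RAID1) on the two NVMe drives.
# On Proxmox this pool is normally created by the installer.
zpool create -o ashift=12 rpool mirror \
    /dev/disk/by-id/nvme-DISK0 /dev/disk/by-id/nvme-DISK1

# VM pool: the 4 SSDs as two mirrored vdevs (RAID10-style),
# trading half the raw capacity for better random-write IOPS
# than a single raidz vdev would give for DB workloads.
zpool create -o ashift=12 ssdpool \
    mirror /dev/disk/by-id/ata-SSD0 /dev/disk/by-id/ata-SSD1 \
    mirror /dev/disk/by-id/ata-SSD2 /dev/disk/by-id/ata-SSD3
```

With mirrors you can lose one disk per vdev without data loss; whether that redundancy is worth the capacity cost depends on how much you rely on the NAS failover for recovery.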