Hi good people. My customer wants to build an HA virtualization cluster from their existing hardware, and I need some advice on a few planning questions.
Hardware:
- 3x Supermicro servers, each with: 12x 3.5" 8TB 7.2k SATA HDD, 4x 800GB Intel enterprise SATA SSD, 2x 8-core/16-thread Xeon CPUs (16 cores / 32 threads total), 128 GB RAM
- 10 Gbps Ethernet between the servers via a MikroTik switch (only one switch for now, but a second will be added in future for network redundancy)
I want to use Ceph for storage, but reading the Ceph manual I see that for the 96TB of raw HDD capacity in each server I would need roughly 100GB of RAM just for Ceph. That would leave no free RAM for VMs.
So the question: can I reduce Ceph's RAM usage without a penalty to performance or cluster stability? I can give about 32GB of RAM to the cluster storage layer, which would leave roughly 90GB of RAM for virtual machines.
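For what it's worth, on a BlueStore deployment the main per-OSD memory knob is `osd_memory_target` (default 4 GiB per OSD, so 12 OSDs already account for ~48GB before MON/MGR overhead). A minimal sketch of capping it, assuming a recent Ceph release with the `ceph config` database; 2 GiB is roughly the lowest value usually considered safe, and lowering it shrinks the BlueStore caches, so some performance impact is expected:

```shell
# Cap each BlueStore OSD's memory target at ~2 GiB
# (12 OSDs x 2 GiB ~= 24 GiB per node, inside a 32GB budget).
# Going below ~2 GiB per OSD is generally discouraged.
ceph config set osd osd_memory_target 2147483648

# Verify the effective value:
ceph config get osd osd_memory_target
```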
As an alternative, I am planning to build LVM RAID10 from the HDDs, convert that RAID10 to a thin pool, and run DRBD9 as the cluster storage layer on top of it. LVM needs very little RAM, and DRBD9, as far as I know, is not RAM-hungry either. So on each node I could create 2x thin-LVM RAID10 pools from 6 HDDs each, plus 1x thin-LVM pool from the 4 SSDs, put DRBD9 over these 3 pools, and sync them between the nodes with a 3-data-copies scheme.
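The RAID10-to-thin-pool step above can be sketched roughly as below. Device names, the VG name `vg_hdd`, and the pool size are placeholders for illustration, not a tested recipe:

```shell
# Create a RAID10 LV from 6 HDDs: 3 stripes (-i 3), 2-way mirror (-m 1).
lvcreate --type raid10 -i 3 -m 1 -L 20T -n pool0 vg_hdd /dev/sd[b-g]

# Convert the RAID10 LV into a thin pool (LVM creates the
# metadata LV as part of the conversion):
lvconvert --type thin-pool vg_hdd/pool0

# Thin LVs carved from this pool would then serve as the
# backing devices for the DRBD9 resources.
```

One caveat worth checking: the DRBD9/LINSTOR tooling usually wants to manage the thin pool itself, so it may be simpler to hand it the pool rather than individual thin LVs.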
Maybe there are some other options? ZFS+DRBD9 is not an option, since ZFS is RAM-hungry too, but perhaps LVM + GlusterFS, or something else?
Thx.