Hi all,
I've done a bit of reading here and on r/homelab but haven't quite found the answer I'm looking for.
I've just bagged myself 3 Dell R210 IIs, all with the (H200?) SAS card fitted and 32GB RAM, which I'm planning on setting up as a 3-node cluster. This is my first foray into the world of homelabs / clustering, and I intend to run various VMs for my energy monitoring system, home automation, Pi-hole, and a domain controller (when I get round to it), and I'll likely experiment with various Windows / Linux VMs too.
I like the idea of mirrored SSDs for the OS install, and then probably 2 SSDs per node with Ceph for the VMs and data. What I'm looking for guidance on at the moment is how to configure the drives and the Ceph network.
So, as I see it, I have 2 options:
1) Attach the SSDs to the SAS card in JBOD / IT mode, and run Ceph over one of the inbuilt gigabit NICs
2) Attach the SSDs to the internal SATA-2 ports, and source some 10GbE cards to run Ceph over - probably initially as a peer-to-peer (full-mesh) network
Option 1) prioritises local VM/data disk performance over Ceph network performance, whereas Option 2) is the other way round.
What I have no feel for is which option will give me the best performance within the constraints of what I have. Any pointers welcome.
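To put some rough numbers on it, here's the back-of-envelope I've been doing. These are theoretical link rates only (the per-lane SAS and SATA figures, and raw Ethernet line rate), and I know real Ceph throughput will be well below them once protocol overhead and replication traffic are factored in:

```python
# Back-of-envelope: which link bottlenecks first in each option?
# All figures are theoretical link rates, not measured throughput.

GBIT = 1000 / 8  # MB/s per Gbit/s (decimal)

options = {
    "Option 1: SAS (6 Gbit/s) disks + 1 GbE Ceph network": {
        "disk link": 6 * GBIT,      # H200 in IT mode, one 6 Gbit/s lane per SSD
        "Ceph network": 1 * GBIT,   # onboard gigabit NIC
    },
    "Option 2: SATA-2 (3 Gbit/s) disks + 10 GbE Ceph network": {
        "disk link": 3 * GBIT,      # onboard SATA-2 ports
        "Ceph network": 10 * GBIT,  # added 10GbE cards, full mesh
    },
}

for name, links in options.items():
    # The slowest link in the chain sets the ceiling.
    bottleneck = min(links, key=links.get)
    print(f"{name}: limited by {bottleneck} at ~{links[bottleneck]:.0f} MB/s")
```

By this crude measure Option 2's ceiling (~375 MB/s at the SATA-2 link) sits well above Option 1's (~125 MB/s at the gigabit NIC), but I don't know how much that raw-link picture survives contact with real Ceph replication and small-block VM I/O.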