We have a small cluster based on 10 servers and 2 storage boxes.
We are planning to add an 8-node Supermicro (https://www.supermicro.com/en/products/system/4U/F618/SYS-F618R2-RTN_.cfm).
It will act as a Ceph server with 2 major pools:
- a fast pool based on PCIe NVMe for VMs and SQL DBs (4 TB 2.5" SSDs)
- another pool for a read-intensive file server (large files, 100 MB-4 GB each, 20-100 TB total; we will add SSDs in stages) - see the pool sketch after this list
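My rough idea for keeping the two pools on separate media (just a sketch, not tested yet): Ceph tags each OSD with a device class such as nvme or ssd, and a replicated CRUSH rule can be restricted to one class. The pool names, PG counts, and the small Python wrapper below are my own placeholders, nothing final:

```python
import subprocess

def ceph(*args: str) -> None:
    """Run one ceph CLI command and fail loudly if it errors."""
    subprocess.run(["ceph", *args], check=True)

# One replicated CRUSH rule per device class, failure domain = host.
ceph("osd", "crush", "rule", "create-replicated", "fast-rule", "default", "host", "nvme")
ceph("osd", "crush", "rule", "create-replicated", "bulk-rule", "default", "host", "ssd")

# Placeholder pools: NVMe pool for VM disks / SQL, SSD pool for the file server.
ceph("osd", "pool", "create", "vm-fast", "128", "128", "replicated", "fast-rule")
ceph("osd", "pool", "create", "fileserver-bulk", "512", "512", "replicated", "bulk-rule")
```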
Each node will have:
- 2 x 2 TB PCIe NVMe
- 2.5" SSDs (up to six per node, 48 total at full capacity on 8 nodes) - see the capacity sketch after this list
- 2 SATA DOMs for the Proxmox OS
- 40 Gb networking
- 256 GB RAM and 2 CPUs with 10-12 cores each
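To sanity-check the 20-100 TB target, this is the rough math I am working from (a sketch with my own assumptions: Ceph's default 3x replication and stopping around 85% full; the numbers change with 2x replication or erasure coding):

```python
# Rough usable-capacity estimate for the SSD file-server pool.
# Assumptions (mine, not fixed): 3x replication, ~85% target fill.

def usable_tb(drives: int, size_tb: float, replicas: int = 3, fill: float = 0.85) -> float:
    return drives * size_tb / replicas * fill

print(usable_tb(48, 4))   # 8 nodes x 6 bays, 4 TB SSDs -> ~54 TB usable
print(usable_tb(48, 8))   # same bays with 8 TB SSDs    -> ~109 TB usable
print(usable_tb(24, 4))   # starting with only 4 nodes  -> ~27 TB usable
```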
My questions:
- Do you think this hardware is a good fit for Ceph, given our limited budget of $20k?
- Can I start with 4 nodes and then add more on demand? (Right now our switch has only 4 40G QSFP ports, 3 of which are free, so I would install 3 nodes, configure the new storage, deactivate the old storage, and then configure the 4th node. To run more than 4 nodes we would have to buy another QSFP switch, which would put us over budget.)
- Will I be able to mix different sizes of SSDs? (Right now the best prices are for 4 TB, but I might get good deals on 8 TB SSDs later.)