Hello,
We are planning a new dedicated Ceph cluster plus a Proxmox cluster (and a Kubernetes cluster in VMs) for our clients, who want to switch from regular iSCSI + VMware.
For the compute nodes (Proxmox) we can reuse our current Supermicro SuperServers and Twin servers, so there is no issue there.
For the storage, I'm currently planning what hardware to recommend (we can only choose Supermicro servers).
I'm planning to use two pools: one on regular HDDs or SSDs (depending on capacity/price) with NVMe journals, for general use with 3x replication, and one on NVMe disks with 2x replication for databases and other workloads that need it.
This is the equipment I have looked at:
Monitor nodes (3x):
SuperMicro 1028U-E1CRTP+
- 2x 250GB SSD for OS (RAID1)
- 64GB RAM
- 2x E5-2630 v4 CPU
This server type has two 10 Gb SFP+ ports built in.
OSD nodes (5x to start):
SuperMicro 1028U-E1CRTP+
- 2x 250GB SSD for OS (RAID1)
- 128GB RAM
- 2x E5-2630 v4 CPU
- AOC-S3008L-L8e HBA
- 8x 1 or 2 TB 2.5" SAS HDD or SSD
- 1.6 TB PCIe NVMe (for the journals)
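As a quick sanity check on what this HDD/SSD tier gives us, here is a rough calculation (a minimal sketch; the 85% headroom and the RAM rule of thumb are assumptions based on commonly cited Ceph guidance, not exact requirements):

```python
# Back-of-envelope check for the general-purpose pool (sketch only).
osd_nodes = 5
osds_per_node = 8
disk_tb = 2            # planning with the 2 TB option
replication = 3
full_ratio = 0.85      # assumed headroom below the near-full warning

raw_tb = osd_nodes * osds_per_node * disk_tb
usable_tb = raw_tb / replication * full_ratio
print(f"raw: {raw_tb} TB, usable at 3x: {usable_tb:.1f} TB")
# -> raw: 80 TB, usable at 3x: 22.7 TB

# RAM per node: roughly 1 GB per TB of OSD data plus a couple of GB per
# OSD daemon is the rule of thumb I'm working from (assumption).
ram_gb = osds_per_node * disk_tb * 1 + osds_per_node * 2
print(f"rough OSD RAM need per node: {ram_gb} GB of 128 GB")
# -> 32 GB of 128 GB
```

So with the 2 TB disks the general pool ends up around 22 to 23 TB usable, and 128 GB RAM per node leaves plenty of headroom.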
And for the NVMe OSD nodes, two of these:
Supermicro 1028U-TN10RT+
- 2x 250GB SSD OS (RAID1)
- 256GB RAM
- 2x E5-2690 v4 CPU
- 8x 2.5" NVMe drive
- Dual-port 40 Gb/s QSFP+ NIC
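Since this is the part nobody here has experience with, here is the back-of-envelope comparison of drive bandwidth versus NIC bandwidth that worries me (a minimal sketch; the ~2 GB/s per-drive figure is an assumed typical value, real drives and workloads will differ):

```python
# Rough throughput ceiling for one NVMe OSD node (sketch only).
nvme_drives = 8
per_drive_gbs = 2.0      # GB/s per drive, assumed sequential read
nic_ports = 2
nic_gbit = 40            # Gb/s per QSFP+ port

drive_total_gbs = nvme_drives * per_drive_gbs   # 16 GB/s from the drives
nic_total_gbs = nic_ports * nic_gbit / 8        # 10 GB/s from the NICs

print(f"drives: {drive_total_gbs:.0f} GB/s, NICs: {nic_total_gbs:.0f} GB/s")
# -> drives: 16 GB/s, NICs: 10 GB/s, so the network (and CPU) will likely
#    hit its ceiling before the drives do. With only two nodes in a 2x pool,
#    every write also lands on both nodes, which eats into that further.
```

That is part of why I'm not sure how far two of these nodes will scale.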
For networking, I need two data switches with at least 24 SFP+ ports each (48 would be better) and 4 to 6 QSFP+ ports: 2 for the NVMe nodes, 2 for the switch interconnect, and 2 for future use (connecting other switches, etc.).
Plus one regular gigabit switch for IPMI and management traffic (SSH, monitoring, and so on).
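A quick port budget for the data switches (a minimal sketch; it assumes one 10 Gb link from each node to each of the two switches, and the compute node count is a placeholder, since that isn't fixed yet):

```python
# SFP+ ports needed per data switch (sketch only).
mon_nodes = 3
hdd_osd_nodes = 5
compute_nodes = 10      # placeholder, adjust to the real count

ports_per_switch = mon_nodes + hdd_osd_nodes + compute_nodes
print(f"SFP+ ports used per switch: {ports_per_switch} of 24 (or 48)")
# -> 18 of 24 with ten compute nodes; the NVMe nodes hang off the QSFP+ ports.
```

So 24 ports works for now, but 48 leaves much more room for compute nodes.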
I'm pretty sure the generic HDD OSD nodes are fine, since we run a similar deployment without any issues (though maybe I should use a better CPU than the 2630), but I'm really unsure about the NVMe nodes; nobody in the company has experience with them.
Any recommendations?