I will convert my 2-node cluster with local storage to a 3-node cluster with Ceph.
The 3 nodes will look like this:
Supermicro chassis with 8 HotSwap Bays
Xeon 4110 or 4215R
Supermicro Mainboard X11SPi-TF (with 2x 10 GBit onboard)
128 GB RAM
BROADCOM HBA 9400-8i
2x 1 GBit PCIe card
2x 480 GB SSD for Proxmox OS on ZFS-RAID1
3x 2 TB Intel D3-S4610 for CEPH
Now I have 2 main questions left:
1. I am currently using one 1 GBit NIC for the LAN and one 1 GBit NIC for the DMZ.
The 10 GBit ports should be used for a meshed network for Ceph (without a switch).
Do I really need a separate network for Ceph?
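For context, the switchless mesh I have in mind is along the lines of the broadcast bond setup from the Proxmox "Full Mesh Network for Ceph Server" wiki; a rough sketch for one node (the interface names ens19/ens20 and the 10.15.15.0/24 subnet are just example values, not my actual config):

```
# /etc/network/interfaces fragment: broadcast-mode bond over both
# onboard 10 GBit ports, cabled directly to the other two nodes.
# Names and addresses are placeholders; adjust the address per node.
auto bond0
iface bond0 inet static
    bond-slaves ens19 ens20
    address 10.15.15.50/24
    bond-mode broadcast
    bond-miimon 100
```

Each node would get its own address in that subnet, and Ceph would use it as its cluster/public network.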
2. Will 3x 2 TB per node be enough for Ceph? That is 9 disks in total, but I have read about a recommended minimum of 12 disks.
Do I need any extra disks for the BlueStore DB/WAL?
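To put the capacity question in numbers, here is my back-of-envelope math. The replica size of 3 and the ~85% safe fill ratio are my own assumptions (3 is the Ceph default for replicated pools, and 85% is the default near-full warning threshold), not figures from anyone's recommendation:

```python
# Rough usable-capacity estimate for the planned 3-node cluster.
# Assumptions (mine): replicated pool with size 3, stay under the
# ~85% near-full ratio to leave headroom for recovery.
nodes = 3
osds_per_node = 3
disk_tb = 2.0
replica_size = 3
safe_fill = 0.85

raw_tb = nodes * osds_per_node * disk_tb        # total raw capacity
usable_tb = raw_tb / replica_size * safe_fill   # practical usable space

print(f"raw: {raw_tb} TB, usable: {usable_tb:.1f} TB")
```

So roughly 18 TB raw would give on the order of 5 TB of practically usable space, which is the number I am trying to sanity-check.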