Hello Proxmox forum!
I am working on an infrastructure setup for a small/medium business. We are unsure about the networking topology, and my research hasn't turned up a clear answer. We are looking at a 3-node setup with full-mesh networking for Ceph (cluster/internal and public networks) and for corosync. We mainly want to go the mesh route so that we don't have to buy 25Gb switches. I have read the Proxmox docs on full-mesh networking, hyper-converged clusters, etc., but the implementation details and real-world data seem to be lacking.
Our current hardware plan:
- Dell R7xx hosts.
- Each with a single ~20-core CPU.
- Each with 128GB RAM.
- Each with 5x 1.92TB SSDs.
- Each with 4x 25Gb ports.
- Each with 4x 10Gb ports (10GBase-T Ethernet).
- Each with 2x 1Gb ports.
Planned port assignments:
- 2x 25Gb ports for Ceph cluster (internal) traffic (DAC cables, full mesh: node A->B, B->C, C->A); a rough config sketch follows this list.
- 2x 25Gb ports for Ceph public traffic (cabled the same way), possibly also carrying VM migration traffic (see the ceph.conf sketch below).
- 2x 10Gb for VM traffic to LAN.
- 2x 10Gb for management traffic (also serving as a corosync fallback link).
- 2x 1Gb for corosync (full mesh: node A->B, B->C, C->A); see the corosync sketch below.
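For anyone following along, this is roughly what I understand the "Routed Setup (Simple)" variant from the Proxmox "Full Mesh Network for Ceph Server" wiki page to look like in /etc/network/interfaces on one node. The interface names and the 10.15.15.0/24 subnet are placeholders I made up for illustration:

```
# /etc/network/interfaces fragment on node A -- Ceph cluster mesh
# ens2f0 = DAC link to node B, ens2f1 = DAC link to node C
auto ens2f0
iface ens2f0 inet static
        address 10.15.15.50/24
        # node B is reachable only over this direct link
        up ip route add 10.15.15.51/32 dev ens2f0
        down ip route del 10.15.15.51/32

auto ens2f1
iface ens2f1 inet static
        address 10.15.15.50/24
        # node C is reachable only over this direct link
        up ip route add 10.15.15.52/32 dev ens2f1
        down ip route del 10.15.15.52/32
```

Nodes B and C would mirror this with their own address and the routes swapped. The wiki also documents a broadcast-bond variant and a routed-with-fallback variant (FRR/OpenFabric); the simple routed setup has the fewest moving parts, but note that if a DAC cable dies it does not reroute through the third node.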
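With the mesh up, pointing Ceph at the right networks should just be the two network options in /etc/pve/ceph.conf; a sketch with the same placeholder subnet, plus an assumed 10.14.14.0/24 for the public mesh:

```
[global]
    # 25Gb mesh #1: OSD replication and heartbeats
    cluster_network = 10.15.15.0/24
    # 25Gb mesh #2: MON and client (RBD) traffic
    public_network = 10.14.14.0/24
```

VM migration traffic is configured separately; if I understand the docs right, it can be pinned to the public mesh with a `migration: secure,network=10.14.14.0/24` line in /etc/pve/datacenter.cfg (or via Datacenter -> Options in the GUI).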
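And for corosync with the 1Gb mesh as link 0 and the management network as fallback link 1, I'd expect the generated /etc/pve/corosync.conf to end up roughly like this (node names and addresses made up for illustration):

```
nodelist {
  node {
    name: pve-a
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.20.20.1    # 1Gb mesh (primary link)
    ring1_addr: 192.168.10.11 # management network (fallback)
  }
  node {
    name: pve-b
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.20.20.2
    ring1_addr: 192.168.10.12
  }
  node {
    name: pve-c
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.20.20.3
    ring1_addr: 192.168.10.13
  }
}

totem {
  cluster_name: smbcluster
  config_version: 3
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
```

One thing to watch: kronosnet only needs unicast reachability, but that means the 1Gb mesh needs the same routed (or broadcast) treatment as the 25Gb meshes so all three ring0 addresses can reach each other.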
Can anyone help with some real-world implementation details, results, and thoughts?
Further info: we do not expect to outgrow a 3-node cluster, and we can always upgrade these nodes later since we are buying fairly low-spec hardware. Our current infrastructure is a 2-node vSphere cluster running StarWind VSAN with 2x 25Gb ports direct-connected between the nodes. CPU, RAM, and storage are nearly the same as noted above, just packed into 2 nodes instead of 3. About 25 VMs: file servers, domain controllers, database servers, application servers, and web servers. CPU is typically <10%, RAM usage is typically ~50%, storage is ~50% used, and network utilization is low.