Recommendations for a Docker Swarm Proxmox cluster

renatopinheiro

New Member
Apr 17, 2020
Hi,

I have a Docker Swarm stack to deploy that needs to be highly available, so my research ended up at a 3-node Proxmox cluster plus a Ceph cluster (diagram attached).
Each node will run a VM with Docker Swarm hosting the following services (a rough sizing/placement example follows the list):
  • 2x EMQ X brokers - 2 cores 16GB RAM each (4 cores 32GB RAM)
  • InfluxDB - 4 cores 8-32GB RAM 1TB storage
  • Telegraf - 2 cores 8GB RAM
  • MongoDB - 4 cores 16GB RAM 1TB storage
  • 3x Restify API - 1 core 4GB RAM each (3 cores 12GB RAM)
  • Traefik - 2 cores 8GB RAM
  • Monitoring - 2 cores 8GB RAM (only on one node)
  • Logging - 2 cores 8GB RAM (only on one node)
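
Roughly how I would express that sizing and placement with Swarm (the image names and the node label below are just placeholders for illustration):

    # Placeholder example: pin the single-instance monitoring service to one node
    # and cap it at the sizes listed above (2 cores / 8 GB RAM).
    docker node update --label-add monitoring=true node1

    docker service create \
      --name monitoring \
      --replicas 1 \
      --constraint 'node.labels.monitoring == true' \
      --reserve-cpu 2 --reserve-memory 8g \
      --limit-cpu 2 --limit-memory 8g \
      example/monitoring-stack

    # Replicated services get spread across the three VMs, e.g. the API:
    docker service create \
      --name restify-api \
      --replicas 3 \
      --limit-cpu 1 --limit-memory 4g \
      example/restify-api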

So I would like your opinion on this solution, or on better alternatives.
I don't have any experience with server and network hardware, so I also need recommendations on which hardware to choose for the servers, switches, and firewalls.

proxmox_ceph_cluster.png
 
Hi,

If you want to build a 3-node hyperconverged Ceph cluster, I would recommend 25 Gbit NICs for the Ceph network instead of 10 Gbit.
You may not need the bandwidth, but you will get the lower latency of 25 Gbit, and Ceph profits enormously from low latency.
Also, use enterprise SSDs for the OSDs, or even better, enterprise NVMe.
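
For example, on Proxmox VE the basic setup is only a few commands; a minimal sketch, assuming the 25 Gbit ports sit on a dedicated subnet (10.10.10.0/24 here is just a placeholder, adjust to your network):

    # Assumed addressing: 10.10.10.0/24 is the dedicated 25 Gbit Ceph network.
    # pveceph install and the mon/osd creation run on every node; init runs once.
    pveceph install                         # install the Ceph packages
    pveceph init --network 10.10.10.0/24    # write the Ceph network into ceph.conf
    pveceph mon create                      # one monitor per node, three in total
    pveceph osd create /dev/nvme0n1         # one OSD per enterprise NVMe drive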
 
If Ceph profits that much from low latency, then the network should be InfiniBand, since InfiniBand has almost no latency at all.
 
Ceph does not support InfiniBand directly; only RDMA would be possible.
Anyway, at 25 Gbit the latency compared to InfiniBand is nearly the same.
Proxmox VE does not support RDMA.
 
Hi,

Thanks,

Something like this?
https://www.dell.com/en-us/shop/mel...er-install/apd/406-bblc/networking#polaris-pd

Any recommendations for servers? Is a budget between 6-10k for the three acceptable?
 
Server - Refurbished Dell PowerEdge R630 8-Bay 2.5" 1U Rackmount Server
CPU - 2 x Intel Xeon E5-2680 v3 2.5GHz 12 Core 30MB 9.6GT/s 120W Processor SR1XP
Memory - 128GB (16x8GB) DDR4 PC4-2133P ECC Memory
RAID Controller - None
2.5" Drives - 2x Dell 1.92TB SAS 2.5" 12Gbps RI Solid State Drive
External Storage Controller - Dell 12Gbps PCI-E SAS HBA Controller
Network Daughter Card - Dell Dual Port 10GbE + Dual Port 1GbE NDC | Broadcom 57800-T
Ceph Network Controller - Mellanox ConnectX-4 Lx Dual Port 25GbE (does this work with Ceph?)
Power Supplies - 2x Dell PowerEdge 13th Gen 750W 80+ Platinum AC Power Supplies

Is this good hardware for the servers?
 
