Hello,
I'm in the process of studying the best architecture for our future Proxmox setup across 2 datacenters, and I would like to maximize the chance of setting it up the right way from the start, avoiding later modifications.
So if anyone can advise me on what would be best for this:
- I'm setting up 2 datacenters with 2 HPE DL380 Gen 9 servers in each, for a total of 4 servers in the first phase
- A dedicated 1 Gbps VPLS link (L2) will interconnect those DCs
- Each server has 64 GB RAM, 1x 600 GB SAS HDD, 1x 1.9 TB SSD and 8x 1 Gbps ports
- Each datacenter has its own NAS (TrueNAS Scale) with 6x 2 TB SATA HDD + 6x 2 TB SATA SSD
- 2 Cisco switches in each DC:
- one dedicated to storage, as each server & NAS will have a bond of 2x 1 Gbps using LACP (see the bond sketch after this list)
- one dedicated to management, DMZ, inside and Internet traffic
- 2 OPNsense firewalls in each DC, configured in HA.
- A full /24 IP scope
- I definitely want to set up clusters/HA, but would it be best to use a single cluster with all 4 servers spread across the 2 DCs? Or a local cluster with 2 servers in each DC? Or 2 clusters split across both DCs: server1-DC1 + server1-DC2 in one cluster, server2-DC1 + server2-DC2 in another?
- What happens in each scenario if the VPLS link is lost: split-brain? Should I also set up an IPsec tunnel between the DCs to secure cluster traffic? (see the QDevice sketch after this list)
- For storage, should I use Ceph, ZFS pools or iSCSI: internally between the servers? With the NAS?
- Later, I will add additional servers migrated from another DC which is closing. The final state will include 3 servers in DC1 and 4 servers in DC2.
- At that point, a single cluster with all servers in it.
- Boot disk on the 600 GB HDD.
- I hesitate between Ceph and ZFS for the SSDs, knowing that the additional servers will not have SSDs at first but would certainly be upgraded later. So initially they can't participate in the Ceph/ZFS pool (because of the HDD/SSD mix)? Should the Ceph/ZFS pools stay local (I presume yes) or be shared between the DCs and/or include disks from the NAS? (a ZFS replication sketch follows this list)
- I would probably use the NAS HDDs for backups and the NAS SSDs as iSCSI storage for non-essential VMs (see the NAS sketch after this list).
- Should I route over my VPLS network or keep it pure L2? Same question for the IPsec tunnel: GRE or routed?
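To make the questions more concrete, here is roughly what I had in mind for the storage bond on each Proxmox node, in /etc/network/interfaces. The interface names eno1/eno2 and the 10.10.10.x addressing are just placeholders, not our real values, and the Cisco side would of course need a matching LACP port-channel:

# dedicated storage network, one 2x 1 Gbps LACP bond per node towards the storage switch
auto bond0
iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4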
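On the quorum question: if I go with a single 4-node cluster across both DCs, my understanding is that losing the VPLS leaves 2 nodes on each side, so neither half keeps quorum. I was therefore thinking of adding a QDevice on a small third machine outside the two server rooms (a VPS, for example). The IP below is a placeholder:

# on the external quorum host (Debian-based)
apt install corosync-qnetd

# on every Proxmox node
apt install corosync-qdevice

# from one cluster node, register the QDevice and check the result
pvecm qdevice setup 10.10.30.5
pvecm status

Does that sound like the right way to avoid the split-brain scenario, or would you still recommend separate clusters per DC?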
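For the SSDs, in case I go the ZFS route rather than Ceph, I was picturing a local pool on each node plus Proxmox storage replication between nodes over the storage bond. Pool name, storage name, target node "pve2", VM ID 100 and the 15-minute schedule are only examples:

# on each node: local pool on the 1.9 TB SSD (device name is an example)
zpool create -o ashift=12 ssdpool /dev/sdb

# register it once at datacenter level; the same pool name must exist on every node
pvesm add zfspool local-ssd --pool ssdpool --content images,rootdir --sparse 1

# replicate a guest (e.g. VM 100) to another node every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

From what I read this replication is asynchronous, so on a failover I could lose up to one replication interval of data, which is partly why I still hesitate with Ceph for the critical VMs.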
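And for the NAS usage I mentioned, I was simply thinking of an NFS export on the HDD pool for backups and an iSCSI target on the SSD pool for the non-essential VMs, declared roughly like this (server address, export path and IQN are placeholders):

# backups over NFS to the local TrueNAS HDD pool
pvesm add nfs nas-backup --server 10.10.20.5 --export /mnt/hddpool/pvebackup --content backup

# iSCSI target from the TrueNAS SSD pool (I would then put LVM on top of the LUN for VM disks)
pvesm add iscsi nas-iscsi --portal 10.10.20.5 --target iqn.2005-10.org.freenas.ctl:pve --content none

Thanks in advance for any advice.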
Franck