Proxmox cluster: dual DC and shared storage

Bocanegra_carlito

New Member
Apr 26, 2025
Hello all, new member of the forum here... looking for help and advice.

I plan to create a single Proxmox cluster.

Our setup is the following:

  • 6 nodes (3 in each DC)
  • each server has four 25 Gbit network cards
I plan to set up Ceph so that the storage remains available even if one complete datacenter goes offline (i.e. the 3 cluster nodes in that DC go down). In other words: in case of the loss of a DC, can I make sure the storage is not affected?

Honestly, I have already done some searching on the Internet, and many people discuss Proxmox stretched mode.

I'm a newbie and this is the first time I've faced a task like this, so any help and/or advice will be very much appreciated.

Thanks
 
Unless the connectivity between the data centers is your own (e.g. dark fiber), do not go for shared Ceph/storage. It will become your worst nightmare. As Gabriel said, if "something" goes wrong you lose hours of data, or all of it.
 
You need a 3rd DC for an extra Ceph monitor + a Proxmox corosync QDevice. (But you can keep OSDs in 2 DCs only, with Ceph stretch mode + replica 4: 2 copies in each DC.)
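For reference, a rough sketch of what that setup could look like (not a tested recipe: the monitor names a-e, the datacenter names dc1/dc2/dc3 and the QDevice IP are placeholders, and the CRUSH rule follows the example in the Ceph stretch-mode docs; adapt all of it to your environment):

    # 1) Corosync quorum: add a QDevice running in the 3rd location
    pvecm qdevice setup 192.0.2.10

    # 2) Tell Ceph in which datacenter each monitor lives
    ceph mon set_location a datacenter=dc1
    ceph mon set_location b datacenter=dc1
    ceph mon set_location c datacenter=dc2
    ceph mon set_location d datacenter=dc2
    ceph mon set_location e datacenter=dc3   # tiebreaker monitor, 3rd site

    # 3) A replicated CRUSH rule (added to the CRUSH map) that places
    #    2 copies in each datacenter:
    #      rule stretch_rule {
    #          id 1
    #          type replicated
    #          step take default
    #          step choose firstn 0 type datacenter
    #          step chooseleaf firstn 2 type host
    #          step emit
    #      }

    # 4) Enable stretch mode with mon "e" as the tiebreaker; replicated
    #    pools then run with size 4 / min_size 2 (2 copies per DC)
    ceph mon enable_stretch_mode e stretch_rule datacenter

Note that the hosts/OSDs also have to sit under datacenter buckets in the CRUSH map (e.g. ceph osd crush move <host> datacenter=dc1) for the rule to do anything.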
Thanks for your reply, but what do you advise? Should I create a 3rd Proxmox node for the Ceph monitor, or use Ceph stretch mode? And if I should use stretch mode, how do I configure it with replica 4?
 
Unless the connectivity between the data centers is your own (e.g. dark fiber), do not go for shared Ceph/storage. It will become your worst nightmare. As Gabriel said, if "something" goes wrong you lose hours of data, or all of it.
Thanks for your reply, but how can I guarantee this condition: that losing a DC doesn't affect the storage?
 
Shared storage over DC/WAN isn't recommended.
Many hours will be lost if something goes wrong.
Keep Ceph local in each DC + frequent backups with PBS to the other DC (rough sketch below); at worst you lose a few hours of data on a restore, and disasters like a fire are rare.
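For the PBS part, a rough sketch of attaching a PBS server in the other DC as a backup target (the storage ID, hostname, datastore, user and fingerprint below are all placeholders):

    # On the PVE cluster in DC01: add a PBS instance in DC02 as backup storage
    pvesm add pbs pbs-dc02 \
        --server pbs.dc02.example.com \
        --datastore backups \
        --username backup@pbs \
        --fingerprint 'AA:BB:CC:...'    # the PBS TLS fingerprint
    # (the password can be set afterwards, e.g. pvesm set pbs-dc02 --password ...)

    # Then back up guests to it, e.g. VM 100:
    vzdump 100 --storage pbs-dc02 --mode snapshot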

Thanks for your reply.
You mean I should create two clusters (one cluster in DC01 & one in DC02) and create a Ceph cluster in each? Or can I use one cluster with one Ceph?