Geo split cluster?

alexc

Renowned Member
Apr 13, 2015
I have several servers rented in two datacenters. This is done to survive DC outages (a rare case, but it still happens), so we'd like the data from one DC to be copied to the second one, so we can use it to run reserve VMs with that data.

The data itself is PostgreSQL databases, and today all we can afford is doing dumps or binary DB backups and rsyncing them to the second DC. An easy-to-understand scheme, but quite dumb and time-consuming: the dump takes time and CPU power, rsync takes time and bandwidth, and the time delta is quite big.
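
For context, the current procedure is roughly the following (the database name, paths and hostname below are made up, not our real ones):

# nightly dump on the primary DB server, custom format
pg_dump -Fc -f /var/backups/app.dump app_db

# copy the dump to the standby DC over the internet link
rsync -az --partial /var/backups/app.dump backup-dc2:/var/backups/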

What we'd like to test is to create a PVE cluster, put the DB server VM on a ZFS volume and employ ZFS sync.

Sounds nice and quite magical, but will this work? We can afford 1Gb internet links in both DCs, but no direct 10Gb interconnect link. Postgres is not too busy (so we hope the disks are not changing too fast), but this is a hope, not a fact.

Please advise whether we can rely on that approach and, moreover, whether it will even be useful for a DB VM. The DB keeps a lot of data in RAM, so replicating disks may not be the silver bullet?
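
For what it's worth, what I picture on the ZFS side is roughly this (pool, dataset and host names are invented for illustration):

# snapshot the VM disk dataset on the DC1 node
zfs snapshot tank/vm-100-disk-0@rep1

# initial full send to the node in DC2 over ssh
zfs send tank/vm-100-disk-0@rep1 | ssh node-dc2 zfs receive -F tank/vm-100-disk-0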
 
If you want to run a cluster across datacenters you need dark fiber. There you can use CWDM. You will need a lot of links - think about storage traffic.
 
zfs sync would decrease the overhead a little, but overall it's the same procedure as before.

Do you need HA? Should VMs automatically start on node2 if node1 fails?

For HA other methods would be better suited; if it's just for backup, it's fine.
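
To illustrate the overhead point: after the first full transfer, later rounds can be incremental sends, so only the blocks changed since the previous snapshot cross the link (dataset and host names below are just examples):

# take a new snapshot and ship only the delta since the previous one
zfs snapshot tank/vm-100-disk-0@rep2
zfs send -i tank/vm-100-disk-0@rep1 tank/vm-100-disk-0@rep2 | ssh node-dc2 zfs receive -F tank/vm-100-disk-0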
 
If you want to run a cluster across datacenters you need dark fiber. There you can use CWDM. You will need a lot of links - think about storage traffic.
Nice thing to have, but out of budget for us.
 
zfs sync would decrease the overhead a little, but overall it's the same procedure as before.

Do you need HA? Should VMs automatically start on node2 if node1 fails?

For HA other methods would be better suited; if it's just for backup, it's fine.
Actually no, HA is not a priority for us; we can afford as much as 1 hour before starting the backup VMs in the other DC, which is plenty of time to check the rsynced data copy and restore it on the backup DB VM.

But what can we use in our situation other than ZFS sync? It looks like Ceph or DRBD may use even more resources and won't be happy with inter-DC delays?
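
For the record, the check-and-restore step on the backup DB VM is along these lines today (database name and path are examples, not the real ones):

# load the rsynced dump into the standby database
pg_restore --clean -d app_db /var/backups/app.dump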
 
Actually no, HA is not a priority for us; we can afford as much as 1 hour before starting the backup VMs in the other DC, which is plenty of time to check the rsynced data copy and restore it on the backup DB VM.

But what can we use in our situation other than ZFS sync? It looks like Ceph or DRBD may use even more resources and won't be happy with inter-DC delays?

The delay is quite troublesome with Ceph, but it can be tuned for it; it will take quite a performance hit on your storage, though.

In your case zfs sync is totally fine, just make sure each run comfortably finishes within the interval and the runs do not overlap.

Do some tests with */15, */10 and */5; run it manually first and time it to see how long it takes.
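
A rough sketch of how the timing test and the schedule could look (the script path and lock file are made-up placeholders):

# first time one replication round by hand
time /usr/local/sbin/zfs-replicate.sh

# crontab entry for every 15 minutes; flock -n skips a run while the previous one still holds the lock, so runs never overlap
*/15 * * * * flock -n /run/lock/zfs-replicate.lock /usr/local/sbin/zfs-replicate.sh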
 
