Is it possible: PVE + shared storage + HA + replication to remote site?

alexc

Active Member
Apr 13, 2015
All of my previous experience with PVE setups relates to stand-alone hosts, and PVE is perfect for that.

Now we want to connect shared SAN storage to several PVE hosts to have HA available. The SAN (we rent it) appears to be FC-connected.

At the same time we want the same VMs copied (replicated) to a remote site, so that if the primary site goes down we'll have copies of the VMs that we can start.

So I'm trying to figure this out: to have replication working, it seems I need snapshot support at the storage level, right? But according to the table at https://pve.proxmox.com/pve-docs/chapter-pvesm.html, the only storage types that are both shared and snapshot-capable are Ceph and ZFS over iSCSI.

Ceph is not an option for us (we don't have that many disks in each PVE host, nor are the hosts optimized for it in any way), and we're not good at Ceph, so chances are it would fail on us one day.
ZFS over iSCSI sounds better for us, but our SAN is going to be FC, not iSCSI (and FC seems to be faster anyway).

Please advise: am I missing something, or is there a way to have HA on a shared SAN and VM replication at the same time?

P.S. The aim is simple: several PVE hosts in a cluster at one site with the SAN connected to them, plus one or several hosts at a remote site that can receive the replication and store it on a SAN installed there - so we can tolerate a single host failure at the first site, or a complete outage of the first site.
 

bbgeek17

Active Member
Nov 20, 2020
www.blockbridge.com
You will not be able to use ZFS over iSCSI unless you dedicate another host (or two) running Linux/FreeBSD to provide the ZFS layer, i.e.:
SAN (HA) > ZFS host (HA?) > iSCSI > Proxmox (HA)
(HA?) - you will need to create and maintain your own cluster to fail over the ZFS and iSCSI services.
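
If you do go that route, the Proxmox side could look roughly like this in /etc/pve/storage.cfg - a minimal sketch, assuming a Linux host exporting a ZFS pool over LIO; the storage name, portal address, target IQN and pool name are all placeholders:

```
zfs: san-zfs
        portal 192.168.100.10
        target iqn.2003-01.org.linux-iscsi.zfshost:target1
        pool tank
        iscsiprovider LIO
        content images
        sparse 1
```

Proxmox then creates a ZVOL per VM disk on that pool, which is what gives you snapshot support on shared storage.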

The fastest way for you to get storage HA is to look into one of the many cluster-aware file systems: https://en.wikipedia.org/wiki/Clustered_file_system#Examples

You will need to install and maintain the required packages directly on the Proxmox nodes. The CFS will arbitrate concurrent access, and you will most likely use "directory" storage in Proxmox to place your QCOW images there. Note that picking, installing and managing the CFS is out of scope for PVE documentation and support. You'd have to treat it as a completely separate piece of your infrastructure.
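
For illustration, once the cluster filesystem is mounted at the same path on every node, the directory storage entry in /etc/pve/storage.cfg could look like this (storage name and mount path are placeholders; "shared 1" tells PVE the same files are visible on all nodes):

```
dir: cfs-vmstore
        path /mnt/cfs/vmstore
        content images,iso
        shared 1
```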

Blockbridge is another option. Instead of a SAN you would rent two identical storage servers, for example https://www.blockbridge.com/nvme-48-dell-zen3/ - basically, something that can host a lot of disks. Our software would run on top of them and let you use the shared storage option with Proxmox via a storage plugin that we developed and support - https://kb.blockbridge.com/guide/proxmox/.
You can add disks to the servers as you grow, or remove them if your dataset shrinks.


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

AlexLup

Member
Mar 19, 2018
You should check out Ceph's RBD image mirroring, or possibly Gluster as well.
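
For reference, RBD mirroring between two clusters looks roughly like this on the CLI - a sketch only: the pool and image names are placeholders, and both sites need an rbd-mirror daemon configured and peered first:

```
# enable per-image mirroring on the pool (run on both sites)
rbd mirror pool enable vmpool image

# enable snapshot-based mirroring for one VM disk
rbd mirror image enable vmpool/vm-100-disk-0 snapshot

# check replication state
rbd mirror image status vmpool/vm-100-disk-0
```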
 
