Connect and sync two PVE drives with PBS

xsuau
Jan 7, 2026
Hello,

We have some questions about how to proceed with our PVE cluster. For context, we are running an HA redundant architecture with two PVE servers and two external disk enclosures for backup disks configured with ZFS RAID. The second PVE server is a copy of the first, and together they form a cluster with a third server that is used for HA quorum. The problem lies in the backup process: the cluster launches a backup job for the VMs hosted on the active PVE server and stores the data on that server's disk enclosure. We need both backup disks (one in each disk enclosure) to contain the same data. Currently we do this with a script, and we would like to use PBS for this operation instead.

We will be getting a fourth server that can be used for PBS, but we don't know the best way to proceed. We are considering two possibilities:
  1. Install PBS on the fourth server and connect the two disk enclosures via NFS. Then we will add the disks in the cluster's storage tab.
  2. Install PBS on the fourth server as a centralised controller and also install the PBS software on both PVEs. We will then connect the PBS instances on the PVE servers to the centralised PBS using the 'Remote' configuration and synchronise both disks.
We prefer the first option, but we are unsure whether this is the correct way to proceed. Are we missing something to enable this configuration, or is there another way?
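
For reference, with either option the datastore would eventually be attached to the cluster on the PVE side. A minimal sketch of what we have in mind; storage ID, address, datastore name and credentials are all placeholders:

```
# Attach a PBS datastore as backup storage on the PVE side.
# All names, addresses and credentials below are placeholders.
pvesm add pbs backup-pbs \
    --server 192.0.2.10 \
    --datastore enclosure1 \
    --username backup@pbs \
    --password 'secret' \
    --fingerprint '64:d3:...'
```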

Thank you in advance — any advice would be much appreciated!
 
Hi xsuau,

Let me restate it in my own words to check my understanding:
You have a 3-node cluster where two nodes are identical production nodes with ZFS replication (internal drives).
You have PBS as a VM running in PVE which backs up the other VMs to drives in an "external disk enclosure" attached via passthrough (PVE doesn't manage these drives).
Now you are thinking about another PBS for redundancy.

Is this correct?

R.
 
Hi

I may have explained myself poorly, as English is not my first language. I apologize. Let me rephrase:
We do have a 3-node cluster where two are identical production nodes, each with its own ZFS RAIDZ disks. The third node is only used for quorum.
We don't have any PBS in the cluster; we want to add a fourth node to take on this role.
We want this fourth node to replace our script that synchronizes the two disks. These disks store the backups that the PVE cluster makes of our VMs.

Does this explanation help?
 
Though it's intended for syncing to a second PBS, check whether it can sync one datastore to another.
What do you mean? Are you referring to having the two datastores on the PBS? If so, I haven't been able to connect the remote disks to the PBS. I've tried using an NFS mount point, but that doesn't seem to work either.
 
Hi xsuau,

if you have different ZFS pools on the production nodes, then you can't use replication and the third quorum node makes no sense. I expect you do have working replication and this is a misunderstanding (I am not a native English speaker either :-) ).

Anyway, for your question about PBS this isn't important.

I personally don't recommend using an NFS drive with PBS. PBS requires a ZFS pool for backups. PBS uses data deduplication, delta backups, and backup integrity checks based on ZFS features. Each of these functions means many IOPS, and network latency makes things slow.

Creating PBS as a VM in the cluster isn't a good idea: 1) when your cluster fails, you are not able to restore anything, because PBS lives inside the cluster; 2) you can accidentally start a backup of the PBS VM, and that backup would have to include the very VM performing it.

Why can't you attach the enclosure directly to the PBS server? Can you describe the enclosure? It could be anything from a USB disk to a Fibre Channel SAN.

In my opinion, the best option is building a PBS server with the disks attached internally. You can design the ZFS layout exactly to your needs, e.g. a combination of HDDs (data drives) and NVMe (cache drive).
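
A rough sketch of what that could look like; device names and layout are examples only, adapt them to your hardware:

```
# Example layout only: mirrored HDDs for data plus a mirrored NVMe
# "special" vdev for metadata (one common choice for PBS pools; an
# L2ARC cache device is another option). Device names are placeholders.
zpool create -o ashift=12 backup \
    mirror /dev/sda /dev/sdb \
    special mirror /dev/nvme0n1 /dev/nvme1n1

# Create the PBS datastore on the new pool.
proxmox-backup-manager datastore create store1 /backup/store1
```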

R.
 
Good morning,

Just to put your mind at rest, the production system is currently working with other disks, and the replication of the VMs between prod1 and prod2 is functioning correctly (the system forced us to test this).

Back to the topic at hand:

We are unable to connect the disk enclosure directly to the new PBS, mainly due to contractual limitations. The disk enclosure is connected to the servers using a Mini-SAS High Density to Mini-SAS cable.

This is why we are looking for a workaround for the backup disks, and we were thinking of mounting the external disks as 'Remote' or using NFS.
 
I can imagine you would create a ZFS RAID1 on top of two NFS drives (one per PVE node), but I strongly recommend against this. If you lose your cluster, you lose your backups too.

Take this scenario: how do you restore your data when you lose the whole cluster (e.g. after a successful hacker attack)? In that scenario you can't use the enclosure at all (it is not operating / not available via NFS / erased) -> PBS is not working -> ... what now?

For me, a backup system should be as independent of the backed-up systems as possible. Follow the 3-2-1 backup rule.
In the worst-case scenario: independent PBS -> reinstall the PVE cluster -> join the PBS to the PVE cluster -> restore the data.
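
As an illustration, the last step of that worst-case flow could look like this once the reinstalled cluster has the PBS storage attached again (storage name, snapshot timestamp and VMID are placeholders):

```
# Restore a VM from the independent PBS after re-attaching it as storage.
qmrestore backup-pbs:backup/vm/100/2026-01-07T02:00:00Z 100
```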

R.
 
First of all, sorry for the late reply. I see what you mean. We could add external storage in case the cluster is compromised, and we will do so now. With this improvement, we would have three copies of each VM in two separate systems, with one of those copies outside the cluster.

However, necessary as it is, this does not solve the sync issue between the two disks. We weren't considering RAID 1 because we want the two drives to remain separate.

Instead, we were thinking of connecting two PBS datastores to the cluster and applying a sync job between them. This raises the questions:
What would happen if one of the datastores failed? Would the backup process start using the second one, or would this change have to be made manually? Can we back up a VM to two folders at once?
 
You can create multiple datastores in PBS, and you can use sync jobs between datastores. But you have to add every datastore to PVE separately.

So no, the backup process will not automatically switch to the second datastore; the backup job will fail until you manually change the storage (target) of the backup job in PVE.

You can create two backup tasks targeting two separate storages: in your case, the same PBS with two different datastores.
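
For illustration, a sketch of that setup on the PBS side. Datastore names, paths and credentials are placeholders; note that older PBS versions can only sync via a 'remote', so here the remote simply points at the PBS host itself:

```
# Two datastores, one per enclosure (paths are placeholders):
proxmox-backup-manager datastore create enclosure1 /mnt/enclosure1/store
proxmox-backup-manager datastore create enclosure2 /mnt/enclosure2/store

# A remote pointing at this same PBS, used as the sync source:
proxmox-backup-manager remote create local-pbs \
    --host 127.0.0.1 \
    --auth-id sync@pbs \
    --password 'secret' \
    --fingerprint '64:d3:...'

# Pull enclosure1's contents into enclosure2 every hour:
proxmox-backup-manager sync-job create enclosure-sync \
    --store enclosure2 \
    --remote local-pbs \
    --remote-store enclosure1 \
    --schedule hourly
```

On the PVE side you would then add both datastores separately and create two backup jobs, one per datastore.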

One important point to test before you mark this solution as production-ready: disconnect an enclosure and verify that PBS does not freeze during the backup process and that it can boot with one enclosure unavailable.

R.
 
PBS requires a ZFS pool for backups. PBS uses data deduplication, delta backups, and backup integrity checks based on ZFS features. Each of these functions means many IOPS, and network latency makes things slow.
No, it doesn't. PBS's checksum-based verification, deduplication, etc. are NOT based on ZFS but on its own implementation, as can easily be seen by reading the PBS manual. My offsite PBS runs on a vserver with an ext4 datastore.
Did you use a bullshit-as-a-service provider (AI), or where did you get your misinformation?

OP: I would recommend removing the third node from the cluster and using it as a combined QDevice/PBS host, as described by Proxmox developer Aaron in the following thread: Post in thread '2 Node HA Cluster' https://forum.proxmox.com/threads/2-node-ha-cluster.102781/post-442601
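
In rough strokes, the QDevice part of that setup could look like this, assuming the third node has first been removed from the cluster (pvecm delnode) and reinstalled; the IP is a placeholder:

```
# On the repurposed third machine (now outside the cluster):
apt install corosync-qnetd

# On both remaining cluster nodes:
apt install corosync-qdevice

# From one cluster node, register the external QDevice:
pvecm qdevice setup 192.0.2.30
```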