offline offsite backup via zfs

ricotta

New Member
Sep 9, 2024
Hello,

I couldn't find an answer that satisfied me, so I had to create this thread.

Before I can afford an offsite server, I'd like to implement a poor man's offsite backup by copying data to separate HDDs and dropping those off with family members located far away (at least 100 km) whenever I visit them.

What I have is PBS in LXC running on ZFS. It's been like that for years, and I'd like to avoid changing that setup.

PBS in LXC does not really support removable datastores, as adding removable disks to LXC seems to be a PITA (if even possible).

So what I was thinking about is to reuse classic ZFS snapshot cloning, which would go as follows:
  1. Shut down PBS LXC
  2. Snapshot datastores
  3. Start PBS LXC
  4. For each datastore snapshot: zfs send pbs-pool/datastore@snapshot | zfs recv backup-pool/datastore
  5. Remove the original snapshots
I understand that this solution should be self-sufficient, by which I mean I could add the backed-up datastore to any PBS instance.
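The steps above can be sketched as a bash script. The CT ID, pool and datastore names below are assumptions for illustration, and with DRYRUN=1 (the default here) the commands are only printed so you can review them before running anything for real:

```shell
#!/usr/bin/env bash
# Sketch of the 5 steps above -- not battle-tested.
# Assumptions: PBS runs in CT 105, datastores are datasets under
# pbs-pool/, and the removable disk's pool is imported as backup-pool.
set -euo pipefail

CTID="${CTID:-105}"
SRC_POOL="${SRC_POOL:-pbs-pool}"
DST_POOL="${DST_POOL:-backup-pool}"
DATASTORES=("datastore")               # hypothetical dataset names
SNAP="offsite-$(date +%Y%m%d)"

# With DRYRUN=1 (the default) commands are only echoed, not executed.
RUN() { if [ "${DRYRUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

RUN pct stop "$CTID"                              # 1. shut down PBS LXC
for ds in "${DATASTORES[@]}"; do
    RUN zfs snapshot "$SRC_POOL/$ds@$SNAP"        # 2. snapshot datastores
done
RUN pct start "$CTID"                             # 3. start PBS LXC
for ds in "${DATASTORES[@]}"; do                  # 4. replicate to the disk
    RUN sh -c "zfs send $SRC_POOL/$ds@$SNAP | zfs recv -F $DST_POOL/$ds"
done
for ds in "${DATASTORES[@]}"; do
    RUN zfs destroy "$SRC_POOL/$ds@$SNAP"         # 5. remove source snapshots
done
```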

Disadvantages of this solution in comparison to removable datastores or PBS sync jobs would be:
- no control regarding backup depth
- possible garbage (because prune/gc jobs might not have run before snapshotting)
- more prone to human errors

Is all of that right? Is there anything to add?
 
Keeping my PVE installation clean is what keeps me from doing that.

My setup is a 3-node PVE cluster created and maintained using Ansible, where I try to keep PVE as vanilla as possible and install additional stuff via CT or VM.

Benefits:
- PVE won't get polluted
- it's very easy to reshape the cluster (extend, shrink or replace nodes)
- every application is easy to backup and restore
- every application is easy to migrate (in fact I have migrated the PBS to a new node recently this way)

So for now I have implemented exactly the solution I described in the original post as a bash script. The only difference is that I don't remove the snapshots, so my script can leverage incremental zfs send/recv.
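The incremental part can be sketched as a small helper that, given the previous common snapshot, emits either a full or an incremental send. Dataset and snapshot names are made up, and it prints the pipeline as a dry run rather than executing it:

```shell
#!/usr/bin/env bash
# Incremental variant: keep the last snapshot on both sides and let
# zfs send -i transfer only the delta. Names here are assumptions.
set -euo pipefail

incremental_send() {
    # $1 = source dataset, $2 = destination dataset,
    # $3 = previous common snapshot ("" means first, full send),
    # $4 = new snapshot name
    local src="$1" dst="$2" prev="$3" new="$4"
    if [ -n "$prev" ]; then
        echo "zfs send -i @$prev $src@$new | zfs recv $dst"
    else
        echo "zfs send $src@$new | zfs recv -F $dst"
    fi
}

# Example: second run after an initial full send
incremental_send pbs-pool/datastore backup-pool/datastore \
    offsite-20240901 offsite-20240908
# prints: zfs send -i @offsite-20240901 pbs-pool/datastore@offsite-20240908 | zfs recv backup-pool/datastore
```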

I haven't tested restore yet, which is the most important part.
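A minimal restore test might look like this (the datastore name and mountpoint below are assumptions): import the pool on a scratch PBS host with `zpool import backup-pool`, then attach the replicated dataset as an existing datastore by adding an entry to /etc/proxmox-backup/datastore.cfg:

```
datastore: restored-offsite
	path /backup-pool/datastore
```

After that the datastore should show up in `proxmox-backup-manager datastore list` and in the GUI, and a guest restore from it would confirm the whole chain works.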
 