How do you homelabbers handle backups?

xghozt

New Member
Jul 23, 2025
Hello everyone, I was just curious how others handle this, since I run into it often. My home server has grown quite large, around 2 TB. I'm having a hard time maintaining some semblance of a 3-2-1 backup strategy because my upload speed is abysmal at around 40 Mbps (no other options available). Syncing these backups offsite often takes over a week and saturates my network. I've got a simple RAID setup here with redundancy, but that's not ideal as my only backup if something happens to the home lab.
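
For context, the rough math on why it takes that long:

    2 TB ≈ 16,000,000 megabits
    16,000,000 Mb / 40 Mbps ≈ 400,000 s ≈ 4.6 days

and that assumes the upload stays saturated the whole time, so a week with real-world overhead is about right.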

Anyone else struggle with this, or come up with any clever ideas?
Maybe I should get desperate and clone my NAS into a cold storage drive I keep in my car? I'm joking, obviously... but am I?
 
Try restic. I sync 6 TB of data with a 50 Mbps upload; the first sync will take time, but after that it flies.
Additionally, you can use multiple encrypted USB HDDs and keep them at friends'/family's places.
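
Roughly what that looks like (the repo and data paths are just placeholders for your setup):

    # one-time: create an encrypted repository on the external/offsite target
    restic init --repo /mnt/usb-backup/restic-repo

    # nightly: after the first run only new/changed chunks are written
    restic --repo /mnt/usb-backup/restic-repo backup /srv/data

    # keep a sane history and reclaim space
    restic --repo /mnt/usb-backup/restic-repo forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

The repository is encrypted by default, and an offsite target works the same way with an sftp:user@host:/path repository URL instead of a local path.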
 
First off, I don't really keep data on my Proxmox server; I keep my data on a separate NAS. I access all of it via NFS shares, either mounted directly into a VM via fstab or, for Docker containers, through the Docker NFS volume driver for my persistent volumes. As a result, all of my VMs are very small, typically between 32 and 64 GB (there is one at 200 GB, but that is a one-off). So when I back up Proxmox, I really only back up the VMs, and because they are small I use the built-in backup feature rather than Proxmox Backup Server. I also back up my VMs to the NAS.

I let the NAS handle the "real" backup process: it backs up to a couple of other devices in my home lab as well as to R2 object storage on Cloudflare and C2 storage from Synology. Synology's Hyper Backup application has been very good and very easy for me to use. When I was running TrueNAS as my main storage device, I used rsync to an AWS S3 bucket and that was also very good.
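
For anyone wanting to copy the pattern, it looks roughly like this (the IP, export path and volume name are made up):

    # /etc/fstab inside the VM: mount the NAS export at boot
    192.168.1.50:/export/media  /mnt/media  nfs4  defaults,_netdev  0  0

    # Docker: a named volume backed by the same NFS export
    docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=192.168.1.50,rw,nfsvers=4 \
        --opt device=:/export/media \
        media-data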
 
For VMs and LXCs I have a PBS on a TrueNAS box, which also provides the storage for the large data pools of my VMs/LXCs (e.g. media libraries). The NAS is also the target for backing up my notebook data with restic. The NAS bulk data is replicated to an external USB drive and to a Hetzner Storage Box as offsite backup.
For an offsite backup of my PBS datastore I run an offsite PBS on a Netcup vServer. At the moment this mixed setup (a local and a Netcup PBS for VMs and LXCs; restic with the local NAS, an external drive and the Storage Box for offsite backup) is a good compromise between cost and performance/usability.
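
In case it helps anyone: restic talks to the Storage Box over plain SFTP, roughly like this (the username and paths are placeholders):

    restic -r sftp:u123456@u123456.your-storagebox.de:restic/nas init
    restic -r sftp:u123456@u123456.your-storagebox.de:restic/nas backup /mnt/tank/data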

But I'm in the process of moving my main NAS to new hardware, and I will keep the previous NAS as "cold storage". The new and old NAS will both run TrueNAS, so the new one can replicate its data to the old one. The advantage is that I will now have automated daily copies of my NAS data, whereas the replication to the external drive was a manual (and often forgotten) process. I will keep the external drive as an additional (although outdated) copy that I update from time to time.
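
Under the hood that replication is just ZFS snapshots sent over SSH; TrueNAS configures it in the GUI, but a hand-rolled equivalent looks roughly like this (pool, dataset and host names are made up):

    # initial full replication from the new NAS to the old one
    zfs snapshot -r tank/data@repl-2025-07-23
    zfs send -R tank/data@repl-2025-07-23 | ssh old-nas zfs receive -F backup/data

    # later runs only send the delta between two snapshots
    zfs send -R -i tank/data@repl-2025-07-22 tank/data@repl-2025-07-23 | ssh old-nas zfs receive -F backup/data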

I'm also considering moving my remote PBS from the vServer to a managed PBS instance. It will cost a little more (though to be honest, €0.02 per GB, i.e. €20 per TB, at Inett is quite fair) but it saves me the hassle of vServer maintenance.
 
A virtual PBS on the two-node cluster with its own SSD, plus a dedicated PBS server that only powers on for the sync from the virtual PBS. For offsite backups I use a Hetzner Storage Box with restic (or Backrest, if you prefer a web GUI), which runs directly in my Podman VM.
 
Hah, thank you all for the insight. Sounds like I'm not entirely alone in dealing with this problem. I think it might be time to add another machine to my rack just to run PBS, since it has deduplication, and then I'll look into restic or rclone and just deal with the extremely slow syncing. I have never used Hetzner but I'll check it out. I usually default to Backblaze for cloud storage.

@gtawelt -- how is restic different from just using rclone?
 
I have used both rclone and rsync in different situations. rclone is good if you want to copy files to the cloud; I used it for Amazon Glacier (the original Glacier, not the newer S3 Glacier). Lately I have been moving my data out of AWS to object storage on Cloudflare R2. It's a ton cheaper.
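
For reference, R2 is just an S3-compatible remote as far as rclone is concerned; something like this (account ID, keys and bucket are placeholders):

    # one-time remote definition
    rclone config create r2 s3 provider=Cloudflare \
        access_key_id=XXX secret_access_key=YYY \
        endpoint=https://<account-id>.r2.cloudflarestorage.com

    # then copy or sync as usual
    rclone sync /srv/backups r2:my-backup-bucket --progress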
 
Please also note that while restic is great for backing up bulk data and rclone is a great sync tool (e.g. for syncing the native Proxmox vzdump *.vma archives, which are created if you are not backing up to a PBS), neither should be used to back up or sync a PBS datastore: https://forum.proxmox.com/threads/datastore-synced-with-rclone-broken.154709/

And although there are hacks to mount a Hetzner Storage Box via SSHFS or CIFS as a PBS datastore, the performance is really bad and it's not a good solution at all. That's the main reason why I split my backups into PBS for VMs/LXCs and restic for bulk data.
 
@gtawelt -- how is restic different from just using rclone?
rclone was primarily built for syncing/copying data to cloud storage (S3, Google Drive, OneDrive and similar) - it will not compress or deduplicate data.
restic builds a repository and packs your data into chunks (similar to PBS) - you can still mount a snapshot and extract single files.
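
That last point is the big practical difference, e.g. (repo path and file names are just examples):

    # browse all snapshots as a filesystem and copy out single files
    restic --repo /mnt/usb-backup/restic-repo mount /mnt/restic-browse

    # or restore one path from the latest snapshot directly
    restic --repo /mnt/usb-backup/restic-repo restore latest --target /tmp/restore --include /srv/data/docs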
 
Interesting read. What's the best way to back up a PBS then? If I have a PBS set up and cloned to cloud storage, as long as it's healthy I don't understand why PBS wouldn't boot right up once the config and storage were restored from the cloud. Do people not back up their Proxmox Backup Server?
 

They do, but one needs to differentiate between a backup of the PBS itself (operating system plus PBS software) and an offsite copy of the stored backups. For the OS and PBS software, any backup tool for Linux will do (I like restic, but rsnapshot, rsync or kopia will work too). If you run PBS inside an LXC or a VM, you could even use the native vzdump/vma backup of Proxmox VE (this is what I do): set up the VM with one virtual disk for the PBS install and another one for the datastore. Then you can configure your backup job to back up just the VM OS without the data disk (if you have enough space, you could of course back up the data disk as well). The resulting vma files can then be transferred to your Storage Box with rclone or restic.
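
To make that concrete: in /etc/pve/qemu-server/<vmid>.conf the data disk just gets backup=0 so vzdump skips it, and the resulting archives can be pushed offsite afterwards. The VM ID, storage name, sizes and the rclone remote below are made up:

    scsi0: local-zfs:vm-105-disk-0,size=16G
    scsi1: local-zfs:vm-105-disk-1,size=1T,backup=0

    # push the vzdump archives offsite with a pre-configured rclone remote
    rclone copy /var/lib/vz/dump storagebox:pbs-vm-dumps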

The offsite copy of the datastores (the actual backups of the VMs and LXCs) is different: for this, rclone and co. are problematic, as you can see in the linked thread. So for that you have the following options:
- Use tape backup (in a corporate environment, where you would store the tapes at another location)
- Set up sync jobs to sync your backups to an external USB drive as a removable datastore (and store it at a family member's or friend's place) or to an offsite PBS (a cheap vServer is a budget-friendly option, or a cloud PBS provider such as tuxis.nl or Inett); see the sketch below. The offsite PBS could also be set up at a friend's or family member's place if they have decent internet bandwidth.
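
A sync job to a second PBS is a one-liner once the remote is defined; roughly like this on the offsite PBS, pulling from the one at home (names and schedule are placeholders, double-check the options against the proxmox-backup-manager man page):

    proxmox-backup-manager sync-job create home-pull \
        --remote home-pbs --remote-store main --store offsite --schedule daily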
 
I just have a couple of old computers by the desk running PBS. These are scheduled to start one after the other, at 7pm and 8pm. My home server backs up to both every night with scheduled jobs. As long as the old computers can schedule their start and stop, you're not wasting much energy. There's something satisfying about hearing them start and stop automatically, knowing your backups have been completed and verified. This doesn't solve off-site, but those risks are orders of magnitude lower than both machines failing at the same time.
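
If anyone wants to copy this without fiddling with BIOS wake alarms, rtcwake can handle the schedule from a cron job or a post-backup hook; a rough sketch (the time is just an example):

    # power the box off now and have the RTC wake it again at 19:00 tomorrow
    rtcwake -m off -t "$(date -d 'tomorrow 19:00' +%s)"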
 
I've got a new idea. If PBS is set up to use ZFS storage, I can take a snapshot and then rclone that snapshot to S3 cloud storage. I'm going to try it; since the snapshot is a consistent point-in-time copy, it should at least avoid the corruption you get from syncing a live datastore. Thank you all for the insight; if anyone knows why this wouldn't work, I'd be curious to know.
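
Roughly what I'm thinking of trying (pool, dataset and remote names are placeholders):

    # take a consistent point-in-time snapshot of the datastore dataset
    zfs snapshot tank/pbs-datastore@offsite-2025-07-23

    # sync the snapshot contents (not the live datastore) to S3
    rclone sync /tank/pbs-datastore/.zfs/snapshot/offsite-2025-07-23 s3remote:pbs-offsite

(the .zfs directory is reachable even when it's hidden; set snapdir=visible on the dataset if you want it listed)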
 
For my home data, for a while I wasn't backing up much of what sat on redundant storage.

More recently, however, I have started putting together a plan to upload the highest-priority data to the cloud as an actual full off-site backup.

Realistically I won't be backing up every byte on the redundant storage, as the cost is prohibitive.
 
I used to have multiple remote PBS instances to back up to, but PVE connects to them every 10 seconds for the used/available space graphs. I switched to running a PBS in an unprivileged container on every PVE host that I run (and they back themselves up in snapshot mode, without the data storage). Backups from PVE to the PBS are fast (because they are local), and I sync them (pull and push) between the instances. Since they are on different systems (and media) and on different continents, I think I have spread my risk according to the 3-2-1 principle (and I don't ping a remote PBS every 10 seconds).

EDIT: I only have 25 Mbit/s between some of the PBS instances, but my backups are less than 160 GB in total (and the daily changes are much smaller), and the deduplication works wonders (even though I use encryption, since I don't own most of the PBS servers).
 