My backup strategy involving files directly and rsync.net doesn't seem possible in Proxmox. How do I solve it?

kikao (New Member), Apr 5, 2022
Before Proxmox, I had one Ubuntu server running a collection of applications using docker-compose, LXC, and VMs.

Each docker-compose service had a "backup" container that would take backups (whatever that meant for the application) and put the backed-up files on rsync.net using borg/borgmatic.

For VMs, same thing: I'd have borgmatic run on a cron job to do whatever taking a backup meant (stopping the database, etc.) and put the result on rsync.net.
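Roughly, each of those jobs boiled down to something like this (paths, repo location, and database names are just illustrative; in practice borgmatic handled the create/prune steps from its config file):

```
# Nightly job for one application (illustrative paths and names)
# 1. Produce an application-consistent dump
docker exec myapp-db pg_dump -U myapp myapp > /srv/myapp/backup/db.sql

# 2. Push the dump directory to rsync.net with borg (repo initialised beforehand)
borg create --compression zstd \
    ssh://user@user.rsync.net/./borg/myapp::'{now:%Y-%m-%d}' \
    /srv/myapp/backup

# 3. Keep a bounded history
borg prune --keep-daily 7 --keep-weekly 4 ssh://user@user.rsync.net/./borg/myapp
```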

The result was neat because I had actual files visible in the rsync.net backups, and it was very lightweight because I'd only save the files and data that actually need backing up instead of the whole disk (on which 90% of the data is just OS files that can be reinstalled from an ISO plus an Ansible playbook/Dockerfile).

My genius idea was to have Proxmox run a ZFS mirrored pool, which would contain a `backups` dataset. In that dataset, each VM/CT would have its own subdirectory. This subdirectory would be mounted inside the VM/CT and backups dumped into it. Then I'd have a CT, or a cron job on Proxmox directly, send these to rsync.net once a day using borg. This would save me from configuring borgmatic in every single VM/CT I wanted to back up, I wouldn't have to deal with dozens of public keys on rsync.net, and the backup process would be more transparent to the VM/CT.
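In other words, something like this on the Proxmox host (pool, dataset, and repo names are just a sketch):

```
# One shared dataset, one child dataset per guest
zfs create tank/backups
zfs create tank/backups/ct-200
zfs create tank/backups/vm-101

# Each guest dumps into its own subdirectory (mounted inside it somehow),
# and a single nightly cron job on the host ships everything off:
# /etc/cron.d/borg-backups
0 3 * * * root borg create ssh://user@user.rsync.net/./borg/pve::'{now:%Y-%m-%d}' /tank/backups
```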

Except that I can't easily mount a directory inside VMs and CTs. I thought it would be as simple as with VirtualBox's shared folders and Docker's volumes. But it's not at all. And I don't want to deal with hardening NFS just to expose the `backups` dataset's subdirs to each VM and CT.

I could create a small backup disk for each VM/CT and have that stored on the `backups` dataset, which then gets backed up to rsync.net from Proxmox. But that's opaque (I can't easily see which files exactly are in each backup, only the VM/CT disk file) and I'm not sure how well it would deduplicate with borg either.

Where does that leave me? How could I make this backup system work for me so that:

- the process is mostly transparent to the VM/CT; all it has to know is how to provide the files to back up
- isn't opaque, so I can see the actual files contained in each rsync.net/borg backup
- works as it should with borg's deduplication mechanism
- doesn't involve NFS or SMB because hardening is very hard with the former and messy with the latter
 
Short answer: you cannot, and you should drop rsync and borg completely if you use ZFS.

ZFS has everything you need: ZFS snapshots that are sent to another host. You can access all files in each snapshot directly, and the process is much faster and the transfers much smaller than any rsync backup. ZFS is also capable of compression and encryption, so your files are also safe.
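A minimal sketch of that workflow (pool, dataset, and host names are placeholders):

```
# Snapshot the dataset, then replicate it to another host
# (-w/--raw keeps encrypted datasets encrypted on the wire and at the destination)
zfs snapshot tank/data@2022-04-05
zfs send -w tank/data@2022-04-05 | ssh backuphost zfs receive -u backup/data

# Subsequent runs only send the delta between snapshots
zfs snapshot tank/data@2022-04-06
zfs send -w -i tank/data@2022-04-05 tank/data@2022-04-06 | ssh backuphost zfs receive -u backup/data
```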
 
Borg uses its own protocol AFAIK, and rsync.net supports it. They also support rsync, scp, zfs send, and many other methods. rsync is just their name :)

Even if I used zfs send, my problem is the same: I'd zfs send the underlying file system, but my containers and VMs would store their files in disk images. When looking at the backups, trying to extract one particular file within a VM, or restoring to another system, I'd have to deal with disk images and not directly the files they contain. That's unlike now, where the actual files are in the borg backups and I can get to them individually without needing to pull down a large disk image that I then have to inspect to get to the file I want.
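To make the difference concrete: getting one file out of a VM disk image means something along these lines (device, image, and file paths are just an example), whereas today it's a single borg extract of the file itself.

```
# Pulling one file out of a qcow2 VM disk image (illustrative)
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 vm-101-disk-0.qcow2
mount /dev/nbd0p1 /mnt/restore
cp /mnt/restore/etc/myapp/config.yml /tmp/
umount /mnt/restore
qemu-nbd --disconnect /dev/nbd0
```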
 
Oh perfect that they also support zfs send/receive.

Containers store their data inside a ZFS dataset (if using ZFS as their underlying storage, of course), so you'd have exactly what you want. If this file-based restore approach is important to you, stop using KVM/QEMU VMs where you can and use only containers. If you take regular ZFS snapshots, you can access previous versions of files via the ZFS snapshot directory and may not need your backup for restoring single files at all. We use this heavily in combination with Samba for ordinary users via the Windows file-history tab, and it greatly reduces the backup work our IT department has to do.
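For example, every snapshot of a container's dataset is directly browsable under the hidden .zfs directory (dataset and snapshot names are illustrative):

```
# List the snapshots of a container's dataset
zfs list -t snapshot -r tank/subvol-200-disk-0

# Copy a previous version of a single file straight out of a snapshot,
# no restore step or backup pull needed
cp /tank/subvol-200-disk-0/.zfs/snapshot/daily-2022-04-04/etc/myapp/config.yml /tmp/
```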
 
