Reducing local backup usage

Faris Raouf

Does anybody do local vzdump VM backups, excluding directories containing lots of user data, and then separately back up that user data to cloud storage?

The above is what I'm thinking of doing, and it would be nice to know whether someone else is doing it, and whether there are obvious pitfalls I've missed.

The problem I'm facing is lack of local storage for backups.

We only have two nodes in a cluster, no shared storage.
Each node has 2TB of SSD LVM-thin for VMs and 2TB of spinning rust for vzdump backups.

The VMs are backed up from the LVM-thin storage to the spinning rust on a regular basis using vzdump.
Multiple backups of some VMs are retained, which takes up a lot of space.
(After backing up, the backups are rsynced to the other node, and also transferred off-site).
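
For the curious, each run is roughly along these lines (the VM ID, storage name and paths are placeholders, not our real setup):

  vzdump 101 --storage backup-hdd --mode snapshot --compress zstd
  rsync -a /mnt/backup-hdd/dump/ node2:/mnt/backup-hdd/dump-from-node1/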

We have one particularly big (for us) VM - it is 600GB. We are going to need to increase its size to 800GB.
Unfortunately that's going to cause problems, because we won't have quite enough space for the vzdump backups - we'd have to cut down on how many local backups of some VMs are kept, and I don't want to do that.

So, what I'm thinking of doing is excluding the big user-data directories from the vzdump backups, which will reduce the local backup size by 90%, and then using a third-party file-by-file backup system, which we already use on some VMs for granular recovery, to back up the actual user data directly to cloud storage.

The idea is that in the event of a disaster, we would restore the VM from the local backup, getting it up and running very quickly, then restore the data from the cloud storage.

I regret that Proxmox Backup isn't suitable for us at the moment, even though it would definitely help us reduce the amount of local storage being used.
Its capabilities are great, but the requirement of having a separate system to run it on doesn't work for us (running it in a VM doesn't work for us either, unfortunately).
When it was announced, I was really, really hoping it was going to be a sort of plugin or extension to Proxmox itself, not a completely separate entity.
 
To exclude the user data from the backup, add a new disk to the VM, mount it in the guest, and move the directory you don't want backed up onto that disk. You can then uncheck the "Backup" checkbox for that disk (it might be in the advanced section) to exclude it from backups.
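
The CLI equivalent is something like this (the VM ID, disk slot, size and guest paths are just examples):

  # on the Proxmox host: add a new 800G disk to VM 101 that is excluded from backups
  qm set 101 --scsi1 local-lvm:800,backup=0

  # inside the guest: format and mount it, then move the user data across
  mkfs.ext4 /dev/sdb
  mkdir -p /srv/userdata
  mount /dev/sdb /srv/userdata
  rsync -a /home/userdata/ /srv/userdata/
  # add an /etc/fstab entry and repoint the application at the new path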
 
Oh no :-( I thought that I could exclude paths in vzdump, but that's for Containers only, isn't it? I hadn't realised that.
Unfortunately that means my plan is probably doomed. While I can easily move the user data to a separate disk, the original one would still have 600GB allocated. And while the backup of that disk would be small, the "size" in LVM Thin would still be large.

I wish there were a utility that could shrink a disk safely and automatically. It sounds simple in theory - move all the data to the start, resize the filesystem/partition, resize the LVM, resize the PV and we are done :-) :-) I know this isn't as easy as it sounds, otherwise it would have been done already.
 
If you have "discard" enabled for the VM's disks and run fstrim -a (assuming a Linux guest), the unused areas should be zeroed, and vzdump stores zeroed areas very efficiently.
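
Roughly like this (VM ID and disk slot are just examples):

  # on the host: make sure discard is enabled on the disk
  qm set 101 --scsi0 local-lvm:vm-101-disk-0,discard=on

  # inside the Linux guest, after deleting or moving the data
  fstrim -av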

Sizing down the old disk needs to be done manually, because the procedure depends heavily on the guest OS and the layout it uses. It could be simple partitions and file systems, or LVM, ZFS, btrfs, ... which makes it hard to automate while guaranteeing that no data will be lost ;)
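
As a very rough outline only, for the simplest case (a single ext4 partition inside a Linux guest - test on a copy first, the exact commands depend on your layout):

  # 1. shrink the filesystem, offline, to well below the target size
  e2fsck -f /dev/sdb1
  resize2fs /dev/sdb1 450G
  # 2. shrink the partition with parted/fdisk, leaving a safety margin above 450G
  # 3. only then reduce the backing thin LV on the host, e.g.
  #    lvreduce -L 500G pve/vm-101-disk-1   (example LV name; data loss if steps 1-2 were skipped)
  # 4. run "qm rescan" so Proxmox picks up the new size in the VM config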
 
Why can't you add a 3rd box to run as a separate PBS?
It is mainly the cost, really. A local physical server to run PBS, plus enough local storage, is very expensive.

I suppose I could spin up a Proxmox Backup Server VM at one of the cloud providers that permit custom ISO installation, which would be cheaper. But then the associated storage would still be expensive in comparison to, for example, using Backblaze B2, and there would potentially be data transfer costs to worry about too.

The current file-by-file backup system we use removes all these problems - that's why I like it.

The backup server itself is a very inexpensive VM on any provider. You can use that backup server VM to store backup data, but you don't need to. Instead, you can choose to back up directly from the server being backed up to cloud storage, with no local storage being used.

When using cloud storage, all the backup server VM essentially does is handle the backup scheduling, the number crunching for deduplication and incrementals, and other things of that ilk.

On the VMs/physical servers/whatever is being backed up, there is only a simple backup client installed. When using cloud storage, this client transfers data directly to the cloud storage of your choice - it does not go via the backup server VM at all (i.e. you are not paying for transfer out from your data centre, transfer in to the backup server VM, and transfer out from the backup server VM to the cloud).
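
To illustrate the pattern only (this is not the product we use, but the open-source restic client pushes straight to cloud storage in the same way, e.g. to Backblaze B2):

  # credentials, bucket name and path are placeholders
  export B2_ACCOUNT_ID=<key-id>
  export B2_ACCOUNT_KEY=<application-key>
  restic -r b2:my-bucket:vm-data init
  restic -r b2:my-bucket:vm-data backup /srv/userdata

restic doesn't have the central scheduling/management server I'm describing, but the "client talks directly to the cloud bucket" part is the same idea.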

It is, quite frankly, a fabulous technology (of course so is PBS!). What it cannot do (on Linux) is a disk-image backup that would allow a full bootable restore - it can only back up individual files.

If you are familiar with Jungledisk, what I'm describing is somewhat similar to that, but a lot more advanced and with a lot more control and far more flexibility.

Yes, yes, I know. I'm obsessive about backups. And stupid about backups. And worried about costs. And probably making things more difficult for myself than they need to be.

I imagine that, in the fullness of time, PBS may evolve to include the functionality of the other backup system I'm describing. However, while it continues to concentrate on "local" storage (local to PBS) and data transfer from the Proxmox node to PBS, it isn't going to be ideal for me.
 
I have a container running PBS and a section of the slower hard disks for backups. But for offsite, I bought a refurb HP i5 for $300, popped in two hard disks for ZFS, and now I have an offsite backup. You could do the same for a main PBS if you don't have enough resources for a container.
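
Roughly what that looks like, if it helps anyone (container ID, template version and storage names are just examples):

  # on the PVE host: create and start a small Debian container
  pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --hostname pbs --memory 2048 --rootfs local-lvm:8 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp
  pct start 200
  pct enter 200

  # inside the container: add the PBS repo and its key, then install
  apt update && apt install -y wget
  wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
      -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
  echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" \
      > /etc/apt/sources.list.d/pbs.list
  apt update && apt install -y proxmox-backup-server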
 
