Backup swap disks?

Dunuin

Hi,

I purged most of my backups, and now I'm thinking about excluding my Linux swap and Windows cache virtual disks from the backups, so that this frequently changing (and therefore not deduplicatable) but unimportant data doesn't fill up my PBS datastore. This matters especially because I plan to keep backups for many years, so backed-up swap would accumulate over time.

All VMs are already set up with a dedicated virtual disk for swap/caching, because in the past I put these on an HDD pool to avoid wear on my consumer SSD pool. By now everything is on the same enterprise SSD ZFS pool or on a single-disk enterprise SSD LVM-thin.

But if I exclude these cache/swap disks from the backups and then try to restore the VMs, I run into the problem that the initramfs can't start, because the swap disk is set up in fstab but is no longer available.
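Just to illustrate, the swap entry in the guest's /etc/fstab looks roughly like this (the UUID is of course only an example), and when the partition behind that UUID no longer exists the boot hangs waiting for it:

Code:
# /etc/fstab inside the guest - swap referenced by partition UUID
UUID=1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d   none   swap   sw   0   0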
The last time I tried to restore an already existing VM with excluded disks, this was problematic and deleted an existing but excluded disk. So I had no backup (because it was excluded) and also lost the working existing virtual disk, because it looks like PVE deletes the complete VM with all its virtual disks before creating a new VM with the same VMID when doing a restore. So a restore basically wipes all the virtual disks without asking for permission.

Is there already a feature request for an option to tell PVE not to delete all disks but only replace the existing virtual disks, so that disks excluded from backups remain on the VM storage? This would really help when restoring a backup with excluded disks over an already existing VM. In that case restoring a VM with an excluded swap disk wouldn't be a problem, because the excluded swap disk would still exist and could be reused by the restored VM.

Previously I had no idea how to handle this, but with PVE 7.1 we got two new features that could be useful: first, it is now possible to protect backups from pruning, and second, we can now easily move disks between VMs using the webUI.

So my idea was to do an initial backup of all VMs with only the empty swap disk included (i.e. all root/data disks excluded via the VM config). Then I could set a comment like "swap disk only" and enable pruning protection so it won't get auto-deleted. After that I could invert the disk settings, so that all regular backups include only the root/data disks while the swap/cache disks are excluded.
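On the CLI that disk-level include/exclude is just the backup flag of each disk, so roughly something like this (VMID, storage name and disk slots are only examples from my setup):

Code:
# one-time "swap only" backup: root disk excluded, (empty) swap disk included
qm set 100 --scsi0 local-zfs:vm-100-disk-0,backup=0
qm set 100 --scsi1 local-zfs:vm-100-disk-1,backup=1
vzdump 100 --storage pbs --mode stop
# then set the comment ("swap disk only") and the protected flag on that
# backup, e.g. via the GUI, so it never gets pruned

# afterwards invert the flags so all regular backups skip the swap disk
qm set 100 --scsi0 local-zfs:vm-100-disk-0,backup=1
qm set 100 --scsi1 local-zfs:vm-100-disk-1,backup=0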
If, for example, I then wanted to restore an existing VM (let's say VMID 100 with a root disk and a swap disk), I would first restore the normal backup as VMID 100. This replaces the root disk with the one from the backup and deletes the swap disk. Then I could restore the "swap only" backup as a temporary VMID 999, move the swap disk from the VMID 999 VM to the VMID 100 VM, attach it there, and delete the VMID 999 VM again.
If I'm not mistaken, this should work as a workaround?
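Sketched as commands, the restore part would then be roughly the following (VMIDs, the "pbs" storage name and the snapshot timestamps are just placeholders for my setup, and I'm not 100% sure I got the move_disk syntax right):

Code:
# restore the regular backup over the existing VM 100
# (this is the step that also deletes the excluded swap disk)
qmrestore pbs:backup/vm/100/<timestamp> 100 --force 1

# restore the protected "swap only" backup to a temporary VMID
qmrestore pbs:backup/vm/100/<old-timestamp> 999

# reassign the swap disk from the temporary VM to VM 100 (new in PVE 7.1)
qm move_disk 999 scsi1 --target-vmid 100 --target-disk scsi1

# remove the now empty temporary VM
qm destroy 999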

And can someone confirm that it's not problematic to lose the swap data? As far as I know, Linux swap is just swapped-out RAM, and RAM is volatile and would be lost anyway with "snapshot" mode backups. With "stop" mode backups the RAM (and thus the swap) should be empty anyway. So I guess it won't be a problem if the swap disk is replaced with an empty one (but with the same UUID) when restoring a snapshot. But how does this work with Win10 and its swap file? Same as with Linux? There I have a 16GB virtual disk partitioned with NTFS that is set up to hold an 8GB Windows swap file and things like my browser cache.

How do you handle backups and temporary data like swap?
 
Hi,
Is there already a feature request for an option to tell PVE not to delete all disks but only replace the existing virtual disks, so that disks excluded from backups remain on the VM storage?
Not that I'm aware of.

second, we can now easily move disks between VMs using the webUI.
Well, CLI only for now; the webUI part didn't make it into the 7.1 release.

If, for example, I then wanted to restore an existing VM (let's say VMID 100 with a root disk and a swap disk), I would first restore the normal backup as VMID 100. This replaces the root disk with the one from the backup and deletes the swap disk. Then I could restore the "swap only" backup as a temporary VMID 999, move the swap disk from the VMID 999 VM to the VMID 100 VM, attach it there, and delete the VMID 999 VM again.
If I'm not mistaken, this should work as a workaround?
Yes, but it sounds cumbersome. What's the problem with just re-creating the swap disk? The partition UUIDs from fstab not matching?
Via CLI/API one could set a serial for the disk and maybe use that to decide which blockdev is the swap one.
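A rough example of what I mean (the serial and the by-id path are only illustrative; the exact name under /dev/disk/by-id/ depends on the bus/driver of the disk):

Code:
# on the host: give the swap disk a fixed serial
qm set 100 --scsi1 local-zfs:vm-100-disk-1,serial=SWAP100

# in the guest the disk then shows up under /dev/disk/by-id/ with that
# serial, for a virtual SCSI disk roughly like:
#   /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_SWAP100
# which fstab or a small script can use to find the swap blockdev,
# independent of the partition UUID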

And can someone confirm that it's not problematic to lose the swap data?
For backups? Yes, it's not problematic there. The VM is cold-started on/after restore anyway, and in that case memory and thus also swap content is irrelevant. For a live-snapshot and a subsequent rollback it would be problematic.

Anyhow, this is actually a sensible point that could be improved. Off the top of my head, I could imagine letting the user decide whether a disk ignored by the backup should be re-created or just re-used; the latter would only be possible when restoring over the same VM with a matching drive. Naturally this needs some closer thought about possible implications, but IMO it's worth considering.
 
Not that I'm aware of.
Ok, then I will add it.
Well, CLI only for now; the webUI part didn't make it into the 7.1 release.
OK, that should do the job too.
Yes, but it sounds cumbersome. What's the problem with just re-creating the swap disk? The partition UUIDs from fstab not matching?
Via CLI/API one could set a serial for the disk and maybe use that to decide which blockdev is the swap one.
Yes, but that was the best idea that came to my mind. I thought it would be easier to just move a working swap disk instead of doing all that work in the guest again: finding the right disk, creating a partition table, partitioning it, finding out the new UUID, editing fstab to match the new swap partition, rebuilding the initramfs, rebooting, checking that swap is working, ... that sounds way more complicated.
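For comparison, inside the guest that would be roughly the following (assuming a Debian-based guest and that the new empty disk shows up as /dev/sdb, which is just an example):

Code:
sgdisk -n 1:0:0 -t 1:8200 /dev/sdb   # new GPT with one Linux swap partition
mkswap /dev/sdb1                     # write the swap signature
blkid /dev/sdb1                      # note the new UUID
# edit /etc/fstab so the swap entry points at that new UUID
update-initramfs -u                  # rebuild the initramfs
swapon -a && swapon --show           # activate and verify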
Anyhow, this is actually a sensible point that could be improved. Off the top of my head, I could imagine letting the user decide whether a disk ignored by the backup should be re-created or just re-used; the latter would only be possible when restoring over the same VM with a matching drive. Naturally this needs some closer thought about possible implications, but IMO it's worth considering.
Yep, that sounds great. In that case I would only need to restore and move the swap disks for a full restore to a fresh pool, and not every time I just want to test something and revert afterwards.
And that might also prevent other people like me from accidentally wiping a virtual disk. It really didn't occur to me that a backup restore could wipe a virtual disk that wasn't part of the backup; I thought it would only overwrite the virtual disks I actually backed up.

Edit:
Added a feature request to the bug tracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=3783
 
