Separate backups by disk

promoxer


I have 3 disks attached to my VM. Is it possible to back them up separately into different files? Or is it possible for me to get a copy of a disk and directly make a copy of it offsite? I don't want to back up the 200G disk; the 32G disk is the OS and the 10G disk is emails. It's cleaner for me to separate them.

Or is there a better strategy?
 
You could ignore all the PVE stuff and directly back up a single zvol at the ZFS level. For example, by using the "zfs send" command, piping that to an "ssh" command, and piping that to a "zfs receive" command, in case the offsite location is using ZFS too. If the offsite host isn't using ZFS, you could pipe the output of "zfs send" into a file and copy that file to your offsite storage. To restore it to a ZFS pool, you could then download that file and pipe it to "zfs receive".
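Roughly like this (pool, dataset, host and path names are just placeholders for illustration, not from your setup):

```bash
# Snapshot the zvol first so the stream is consistent while the VM keeps running
zfs snapshot rpool/data/vm-100-disk-1@offsite1

# Offsite host also runs ZFS: stream directly into "zfs receive" over SSH
zfs send rpool/data/vm-100-disk-1@offsite1 \
    | ssh root@offsite-host zfs receive backuppool/vm-100-disk-1

# Offsite host without ZFS: dump the stream into a file and copy that file offsite
zfs send rpool/data/vm-100-disk-1@offsite1 > /mnt/backup/vm-100-disk-1.zfs

# Restore later by piping the file back into "zfs receive" on any ZFS pool
zfs receive rpool/data/vm-100-disk-1-restored < /mnt/backup/vm-100-disk-1.zfs
```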
 
I gave it some thought. The only reason I'm using ZFS is its ability to take backups while the VMs are running.

So, it is unlikely I will have ZFS offsite. This ZFS send will combine everything into one big file, which makes it really hard to cherry-pick restorations.
 
I gave it some thought. The only reason I'm using ZFS is its ability to take backups while the VMs are running.
Then you aren't really making use of all the great features ZFS could offer you. ;)
LVM-Thin, or any filesystem with qcow2 on top, and so on would also allow you to take snapshot-mode backups.

This ZFS send will combine everything into one big file, which makes it really hard to cherry-pick restorations.
Yup, it backs up at the block level.
 
Well, this LVM / qcow2 / ZFS stuff is really new to me and I'm still learning. For now, anything I implement must allow me to recover without having to Google in a panic :)

It's probably what they call "fixing the plane while flying it"

So, is there a way to separate my backups by disk?
 
Well, this LVM / qcow2 / ZFS stuff is really new to me and I'm still learning. For now, anything I implement must allow me to recover without having to Google in a panic :)
For that you usually create a recovery plan and write it down while everything is working fine, so you know exactly what to do (and have hopefully also tested/trained it) when bad stuff actually happens.
So, is there a way to separate my backups by disk?
Not using the GUI or backup tasks. And I think the vzdump command will also not allow you to do that. Maybe by writing a script that excludes 2 of the 3 disks using the "qm set" command, then running the vzdump command to back up that VM, and then using "qm set" again to include those disks.
But vzdump backups of VMs are done at the block level too, so you also can't cherry-pick single files to restore.
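An untested sketch of such a script (VM ID, disk slots, storage and volume names are just examples; any other options already set on those disks would need to be repeated in the "qm set" calls):

```bash
#!/bin/bash
# Temporarily exclude the 200G and 10G disks from backup
qm set 100 --scsi1 local-zfs:vm-100-disk-1,backup=0
qm set 100 --scsi2 local-zfs:vm-100-disk-2,backup=0

# Back up the VM; only disks with backup enabled (here: the OS disk) end up in the archive
vzdump 100 --mode snapshot --storage backup-storage

# Re-enable backup for the excluded disks
qm set 100 --scsi1 local-zfs:vm-100-disk-1,backup=1
qm set 100 --scsi2 local-zfs:vm-100-disk-2,backup=1
```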

I would do the backup on the guest level (so from within the VM) and only back up the files needed.
 
Yeah, I had a tried and battle-tested plan before moving to Proxmox. So if things crash on me now, I'm quite screwed.
The main reason I moved to Proxmox was the better performance it can provide, but it came with a bunch of concepts that are quite foreign to me.

For emails, I don't need it at the file level, but there is a possibility I'll need to attach that disk to another VM, hence the separation requirement.
The 200G disk is the one that needs to be restored by files, so I have it backed up at the guest level already.

I guess my best option for now is to just back up the emails and OS together.

Thank you.
 
Hi Promoxer,

You can back up individual disks in the GUI and restore them individually, if I'm not mistaken. Then you detach the disk from the guest that's broken and attach it to a guest that you prefer. Some of this functionality is improving in the GUI (there are feature requests being actively worked on by the devs right now, and you can search the forum for those improvements).

I think Dunuin's suggestions will perform better though. I would recommend that you study PVE and ZFS closely to maximize your infrastructure's capabilities and usage.

When backing up using vzdump, what happens is that KVM/QEMU backs up with a virtual snapshot while the VM is running; this will reduce your operating performance for that specific VM for technical reasons beyond the scope of this thread. The highest-performance and most configurable method is to use ZFS. I would practice a few times to avoid future Google panics and to improve your overall backup solution to suit your needs.

If you move to CEPH in the future, you will notice that it's easier to use the GUI and better for clusters. Another thought that I had was VM cloning and replication, which can be done in the GUI, but cloning is not automated right now.

Cheers,

Tmanok
 
Hi Tmanok,

1. Thanks, any good links that have accurate and reliable info on ZFS?
2. The nature of my server is not production, thus it can live with limited downtime as long as restoration time is capped and not undefined.
3. While a technically superior solution is great, I also need to ensure that whatever system gets adopted can be handled by someone else without extensive training and not by 1 indispensable expert. :)
4. Do you think it is a good idea that I just jump straight into studying CEPH instead?

In any case, all my data is now covered by the backup plan. But I would still love to separate emails from OS if there is a straightforward way.
 
I will spend some time later this week to try and document / support you with this; in the meantime, I have two comments and one question. Links in the near future, but help from @Dunuin would be appreciated.

Comment 1
When I recommended that you study ZFS carefully, what I intended was that once it is set up by you, it would also be documented at your site by you. Every location has industry experts, and it's up to us to teach others in our organizations to get up to date. I've been using ZFS for over 6 years now and PVE in production for about 4. Some businesses that I left switched to another IT department entirely, and it was up to them to either learn it or go with a more regionally well-known solution such as VMware. I've also joined 100% Linux businesses where they had a difficult time finding Linux-skilled staff, but we trained our new staff in-house and ended up growing the overall regional community. My point being, anything will become a bit niche; the trick is to avoid building it all yourself from the ground up like a developer would, because as you said, others will need to come in and supersede you. ZFS and PVE are not solutions that start terribly close to the ground; fortunately the communities are quite huge these days, but in your case it sounds like you want to stick with the GUI wherever possible to minimize onboarding time.

Comment 2
I wouldn't study just CEPH or just ZFS: if you're an IT professional you should at a minimum know ZFS, and as a PVE and large virtualization infrastructure user you should consider learning CEPH. If your setup is small (which it sounds to be), I would consider waiting until it is necessary. If you have multiple nodes, ZFS for local storage and NFS for NAS storage should work fine, so long as LXCs can experience downtime during backups. CEPH will improve your experience but adds more niche knowledge than ZFS; while both can be used in space or in Antarctica because they are very robust (and, in the case of CEPH, very autonomous), they will eventually need maintenance, and someone experienced or trained in both should be used, just as you wouldn't let PC support staff work on a vCenter cluster or vSAN in VMware.

Question
What do you mean by keeping the email separate from the OS? Explain your goal and I can probably assist.

Cheers,


Tmanok
 
Alright, let's start here regarding your backup questions. There are still some caveats to backing up and restoring right now, but they are being actively worked on by PVE staff: https://forum.proxmox.com/threads/feature-request-advanced-restore-options-in-gui.109707/

The summary from that thread that I understand is:
  1. When backing up you can exclude a VM disk from being backed up. This is a global flag though and would mean that the disk is never backed up. There is a user request for making this per backup job.

    So, is there a way to separate my backups by disk? - Proxmoxer
    Not at this time in the GUI. I was mistaken earlier and Dunuin was correct.

  2. When restoring a backup to a VM with an excluded disk, the restoration will detach the disk and restore (overwrite) the disks that were not excluded from backup.
  3. When restoring a backup to a CT with an excluded mount point, the restoration will erase the mount point data and overwrite the root volume.
  4. "The Bus+ID combination is used to detect if a disk is "the same": If scsi0 is in the backup, it will overwrite the scsi0 disk of the VM upon restore. If scsi1 is excluded, it won't end up in the backup. If scsi1 is not in the backup, it won't overwrite the scsi1 of the VM upon restore."
  5. "For containers, the situation is different, because the backup is done as a single filesystem, so if a mount point is excluded you will lose it upon restore. There is a warning for this."
  6. When you back up a VM with all disks, the restore task will restore ALL disks. In the near future you will be able to back up a VM and all disks but select which ones get restored and which do not.

Practically speaking (a rough CLI sketch of these steps follows the list):
  1. A VM with scsi0 (/) and scsi1 (on some mount point inside /)
  2. Exclude scsi1 from the backup and backup
    1. Only scsi0 will be backed up
  3. Restore the backup
    1. scsi1 is removed from the hardware configuration of the restored VM
    2. scsi1's data is not overwritten
  4. Reattach scsi1 to the VM's hardware
    1. Situation has been fully restored from backup
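From the CLI, steps 3 and 4 might look roughly like this (archive path, VM ID and volume name are placeholders; the same can be done from the GUI):

```bash
# Step 3: restore the backup over the existing VM (the archive only contains scsi0)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-example.vma.zst 100 --force 1

# The excluded disk is left behind as an "unused" volume with its data intact.
# Step 4: reattach it as scsi1
qm set 100 --scsi1 local-zfs:vm-100-disk-1
```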
Because of how LXCs are built, they come with some limitations, such as being backed up and restored as one single file system and being tied to the host, so live migration and virtual snapshots are not possible. Snapshots on ZFS and CEPH are possible, though, making backups possible without downtime (no suspend or shutdown required).

Cheers,


Tmanok
 
"The Bus+ID combination is used to detect if a disk is "the same": If scsi0 is in the backup, it will overwrite the scsi0 disk of the VM upon restore. If scsi1 is excluded, it won't end up in the backup. If scsi1 is not in the backup, it won't overwrite the scsi1 of the VM upon restore."
When restoring a backup to a VM with an excluded disk, the restoration will detach the disk and restore (overwrite) the disks that were not excluded from backup.
From what I remember, a restore will first wipe the guest with all its virtual disks and then create a new guest with new virtual disks from the backup. If a virtual disk was excluded from the backup, it won't be created, but the existing disk will still be wiped.
But I can test that again to verify.
 
Hi Dunuin,

You're right, that used to be the case but it isn't anymore. Here's a quote from Fiona in the linked thread that you started:
"this doesn't happen. For VMs, disks that were excluded from the backup will not be wiped upon restore. They will be left as unused disks still containing their data. You can just reattach them after restoring the backup." - Fiona

Cheers, and yes, of course please test it for yourself first using the latest PVE release, or at least 7.2+, I believe.

Tmanok
 
@Tmanok

My first priority is to stabilize the setup; anything else can come after that. I'm not against learning, but I'm more of a software guy; infrastructure and DevOps are an add-on rather than a passion. :)

Right now, I have a corrupted ZFS pool and can't perform backups properly, and it presents itself as a new use case for my single-disk backup problem.

Generally, I separate content from executable code, mainly so that I can back up only the content. My recovery usually involves re-installation, as malware/virus infections are also something I plan for.

The second use case presented itself when CentOS 8 had its life cut short. That was when I had to move my emails to Ubuntu, and storing mail on a separate disk made that very simple.

And now, with this bit-rot problem, I can lose a few emails here and there, but recovery has to include reinstallation of the OS to ensure no executable has rotted. In the same vein, this separation also makes that operation easy.

That said, I still have daily backups of the VMs themselves (without the content) as a convenience.

I can also confirm that with backup=no set in PVE 7.4, those disks will be detached, but not wiped.
Likewise, if we directly pass through a disk using qm set xxx -scsi..., it becomes detached and untouched, i.e. not wiped.

Let me try out your steps in a bit after I extract my emails from the corrupted ZFS. Thank you.
 
@Tmanok

I have some thoughts. We are actually trying to do two different things here: back up a VM vs. back up a disk.

The first is already taken care of by PVE.

Is it possible for the second to be:

1. Take a snapshot of the disk, `vm-100-disk-1`, to "freeze" it in time
2. Run `qemu-img convert...` to turn it into a qcow2 for backing up

Or are there any more efficient commands that could replace those two steps while the VM is still running? The idea came from https://forum.proxmox.com/threads/get-more-details-of-failed-backup.126699/post-553553

Of course, this qualifies as an option only if there is a way to restore the disk.
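Something like this is what I have in mind for a ZFS-backed disk (pool, dataset and paths are just placeholders; I assume the snapshot has to be cloned first, since zvol snapshots aren't exposed as block devices by default):

```bash
# 1. Freeze the disk in time while the VM keeps running
zfs snapshot rpool/data/vm-100-disk-1@export
# Clone the snapshot so it appears as a readable block device under /dev/zvol/
zfs clone rpool/data/vm-100-disk-1@export rpool/data/vm-100-disk-1-export

# 2. Convert the frozen state into a standalone qcow2 file
qemu-img convert -f raw -O qcow2 \
    /dev/zvol/rpool/data/vm-100-disk-1-export /mnt/backup/vm-100-disk-1.qcow2

# Clean up the clone and the snapshot afterwards
zfs destroy rpool/data/vm-100-disk-1-export
zfs destroy rpool/data/vm-100-disk-1@export

# Restore by converting the qcow2 back onto an existing zvol of at least the same size
qemu-img convert -f qcow2 -O raw /mnt/backup/vm-100-disk-1.qcow2 \
    /dev/zvol/rpool/data/vm-100-disk-1
```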
 
Hey Proxmoxer,

Fair enough; if you've got it running easily enough with ZFS, then great! I've continued the other thread with Fabian to see how far we can push this and will then submit a ticket to implement it in the GUI. Flexibility and feature strength are really important to me too.

Cheers,


Tmanok
 
