Feature Request: advanced restore options in GUI

Dunuin

What I'm missing are more advanced options when restoring backups:

1.) a way to set the target storage for each individual virtual disk
Right now you can only choose a single target storage, and all virtual disks will be restored to that storage. But my VMs often have virtual disks on different storages. Let's say, for example, I have an HDD storage for cold data and an SSD storage for hot data, and my VM has one virtual disk on each storage. When I restore such a VM, I need to choose either the big HDD storage or the small SSD storage. So I restore both virtual disks to the HDD storage and then have to move one of the restored virtual disks from the HDD storage back to the SSD storage. This is not only annoying because of the extra step and because I have to write down somewhere which virtual disk belongs to which storage in order to fix what the restore gets wrong, it also means more downtime and possibly more SSD wear, as one of the virtual disks is unnecessarily read and written twice.
Another problem would be this case: let's say I have two 4 TB storages and a VM with two virtual disks of 3 TB each, one stored on each storage. Right now it wouldn't be possible to restore such a VM at all, because neither storage can temporarily hold 6 TB. So here it isn't even an option to move the virtual disks to the correct storage afterwards.
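For reference, the current two-step workaround might look roughly like this on the CLI (the VMID, storage names and backup volume ID below are placeholders, not taken from a real setup):

# restore everything to the big HDD storage first, overwriting the existing VM
qmrestore pbs-store:backup/vm/100/2024-01-01T00:00:00Z 100 --storage hdd-storage --force
# then move the "hot" disk back to the SSD storage and drop the copy left on the HDD storage
qm move_disk 100 scsi1 ssd-storage --delete 1

With per-disk target storages in the restore dialog, the second step would simply go away.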

2.) a way to not restore a virtual disk
Let's say my VM has two virtual disks. The first is a small virtual disk that stores the OS, and the second one is very big, not always in use, and only mounted manually from time to time. Maybe I want to test something and don't need access to the data on that big, manually mounted virtual disk. In that case it would be great if I could restore the VM under another VMID with just the small OS virtual disk, leaving out the big virtual disk that is only mounted when needed. This would again reduce SSD wear and speed up the restore, as only what I actually need gets restored.

3.) a way to reuse an existing disk without destroying/overwriting it
I already posted a feature request for this: https://bugzilla.proxmox.com/show_bug.cgi?id=3783
To explain it again in short: let's say I have a VM with two virtual disks. The first one stores my guest OS and the second one just my swap partition. I want to exclude the swap virtual disk from the backup, because it contains only temporary data that can be lost and that deduplicates poorly, so it wastes a lot of space in the PBS datastore.
But the way a backup restore of an existing VM currently works is that it first deletes the VM with all its related virtual disks and then creates a new VM from scratch based on the backup. So the restore deletes both the OS and the swap virtual disks and then restores only the OS virtual disk, because the swap virtual disk was excluded from backups. The result is a VM that won't be able to boot, because it now only has the OS virtual disk and the swap virtual disk is missing (but still required, since it is supposed to be mounted via fstab).
What I would like to see is an option to tell PVE before restoring not to delete that swap disk but to reuse it with the restored VM. So on a restore, PVE should destroy the OS virtual disk, keep the swap virtual disk untouched, and create only the new OS virtual disk from the backup. I would then either see the swap virtual disk as an unused disk that I could manually attach afterwards, or, even better, PVE could automatically reattach that swap disk if it was attached previously (as configured in the old VM config file).
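For context, excluding such a disk from backups already works today, either via the disk's "Backup" checkbox in the GUI or roughly like this on the CLI (VMID, storage and volume names are placeholders; any other options on that drive line would have to be repeated):

# mark the swap disk (here scsi1) so that vzdump/PBS skip it during backups
qm set 100 --scsi1 local-lvm:vm-100-disk-1,backup=0

The missing piece is only the restore side described above.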


A solution that I personally would like to see is a set of advanced options in the GUI's restore dialog, both when restoring a VM from the VM's Backup tab and from the backup storage's Backups tab, like this:
A list of all virtual disks of that VM (whether included in the backup or not), each with a dropdown next to it on the right. This dropdown contains these options:
A.) don't restore, destroy existing vDisk
B.) don't restore, reuse/keep existing vDisk
C.) restore vDisk to Storage MyStorage1, destroy existing vDisk
D.) restore vDisk to Storage MyStorage2, destroy existing vDisk
...

Option A would help with point 2.
Option B would help with point 3.
Options C, D, ... would help with point 1.

These advanced options might also be hidden behind a "show advanced options" checkbox, like other dialogs in the GUI do it. It would also be great if PVE could read the VM's config file and preselect, for each virtual disk, the storage that was used in the config file. So if the first vdisk was stored on MyStorage1 and the second vdisk on MyStorage2, the GUI would preselect "restore vDisk to Storage MyStorage1, destroy existing vDisk" for the first vdisk and "restore vDisk to Storage MyStorage2, destroy existing vDisk" for the second one.
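For illustration, the information needed for such a preselection is already present in the VM's config file; a made-up excerpt using the storage names from above could look like this:

# /etc/pve/qemu-server/100.conf
scsi0: MyStorage1:vm-100-disk-0,size=32G
scsi1: MyStorage2:vm-100-disk-1,size=3000G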

Would anyone else like to see this feature?
Should I create a feature request in the bug tracker, or is there already a similar one (apart from point 3, for which I already created it)?

Edit: I'm not sure whether this belongs in the PBS or the PVE subforum, because it would also be used with vzdump backups. Please move it to the PVE subforum if that fits better.
 
I would love to have those features too. :)

Basically, we have backup features whose restore counterparts are missing:
  • We can back up disks from different storages, but cannot restore them to different storages.
  • We can exclude disks from backups, but cannot specify on restore that the excluded disks must not be deleted or touched at all.
The ability to restore only specific disks (and leave the other disks untouched) would round this out further.

 
Well, since I just unintentionally wiped/re-created the virtual disks of a VM that I restored and lost valuable data this way, I obviously have to vote this feature request up!
 
A quick solution would be a popup with a doubly annoying warning to confirm that any disks currently attached to the VM will be deleted if they aren't present in the backup. Detach them prior to the restore, or restore the backup to another ID.
 
For VMs, disks that are not in the backup will be kept as unused disks. (EDIT: for completeness, I should mention that the Bus+ID combination e.g. scsi0 is used to detect this).
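For illustration, a rough sketch of what this can look like in a VM config (VMID and volume names are placeholders; scsi1 was excluded from the backup via backup=0):

# before the restore
scsi0: local-lvm:vm-100-disk-0,size=32G
scsi1: local-lvm:vm-100-disk-1,size=8G,backup=0
# after restoring that backup over the VM: the scsi1 volume is kept,
# but ends up as an unused disk that can be reattached manually
scsi0: local-lvm:vm-100-disk-0,size=32G
unused0: local-lvm:vm-100-disk-1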

For containers, there is a big warning already:
CT 121 - Restore. This will permanently erase current CT data. Mount point volumes are also erased.
 
I just found out about this (setting up my first PVE/PBS environment) and this makes restores of a VM unusable for me. Restoring the VM should not wipe disks that are attached to it and that were excluded from the backup in the first place. This makes a VM backup completely unusable if you mount any disks on it that should not be part of the backup.

This is a very serious flaw. And I cannot go into production as long as I haven't got my backups sorted out, so it's even a show stopper for me (unless I stop using vzdump and PBS altogether and find an alternative).

Is detaching the disk temporarily a workaround for this? Will the restore still work in that case? Because then I could proceed (carefully).
 
this makes restores of a VM unusable for me
It makes it challenging, hardly "unusable".

With how things stand now, just don't restore in place. Always restore to a new VMID; then your existing VM is not going to be affected, and the new VM will not have any impact on it. After the restore you can massage the two VMs to your requirements.
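For reference, restoring to a new VMID can be done from the GUI, or roughly like this on the CLI (VMIDs, storage name and backup volume ID are placeholders):

# restore the backup of VM 100 as a brand-new VM 9100 instead of overwriting VM 100;
# --unique assigns new MAC addresses so the copy can coexist with the original
qmrestore pbs-store:backup/vm/100/2024-01-01T00:00:00Z 9100 --storage local-lvm --unique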


 
If a restore creates an extra 500GB disk out of nothing, it is rather unusable. I am wondering what happens if I detach scsi1 before a restore, and whether the restore will take that scsi1 information from the backup (even though it was explicitly excluded during backup).
 
When using disk passthrough, it won't wipe the passed-through disk. It will just remove the scsi1 entry, so you have to add that again after restoring the VM. I usually save the command to add the disk in the VM's notes, so I just have to copy and paste it into the console.
It will only wipe excluded virtual disks that are on your VM/LXC storage.
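Such a note can be as simple as the qm set call for the passed-through device, e.g. (VMID and device path are placeholders):

# re-attach the passed-through disk as scsi1 after a restore
qm set 100 --scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL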
 
@gctwnl, the easiest way for you is to create a dummy VM with whatever combination you like, specifically exclude a 1G disk from the backup, and then restore any way you see fit. The old rule that "your backups are useless unless you test them" still applies.

In your current config, with an LVM-based disk excluded, it won't be created out of nothing. The restore will find the existing VM, go through its disks, wipe/remove them, and restore what is in the backup to new disks, in your case just one.


 
Hi,
I just found out about this (setting up my first PVE/PBS environment) and this makes restores of a VM unusable for me. Restoring the VM should not wipe disks that are attached to it and that were excluded from the backup in the first place.
this doesn't happen. For VMs, disks that were excluded from the backup will not be wiped upon restore. They will be left as unused disks still containing their data. You can just reattach them after restoring the backup.

Again, the Bus+ID combination is used to detect if a disk is "the same": If scsi0 is in the backup, it will overwrite the scsi0 disk of the VM upon restore. If scsi1 is excluded, it won't end up in the backup. If scsi1 is not in the backup, it won't overwrite the scsi1 of the VM upon restore.

For containers, the situation is different, because the backup is done as a single filesystem, so if a mount point is excluded you will lose it upon restore. There is a warning for this.

EDIT: clarify a bit more.
 
Thank you. Just so that I am 100% certain, the steps are:
  1. A VM with scsi0 (/) and scsi1 (on some mount point inside /)
  2. Exclude scsi1 from the backup and run the backup
    1. Only scsi0 will be backed up
  3. Restore the backup
    1. scsi1 is removed from the hardware configuration of the restored VM
    2. scsi1's data is not overwritten
  4. Reattach scsi1 to the VM's hardware
    1. Situation has been fully restored from backup
Correct? If so, the nuisance is only minor. It would be nice if the backup would remember scsi1 as 'ignored' and re-add it to the VM's hardware on restore. But if the above is correct, then there is no real problem. I've read one post on the forum, though, from someone who said they had turned backup off for scsi1 and lost their data because it was overwritten on restore. That is what triggered me.
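For step 4, reattaching the kept volume could look roughly like this on the CLI (VMID and volume name are placeholders; after the restore the old scsi1 volume shows up as an unused disk in the VM's hardware list):

# attach the kept (unused) volume as scsi1 again
qm set 100 --scsi1 local-lvm:vm-100-disk-1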
 
Yes, as long as the backup you restore does not include a scsi1 disk, the scsi1 disk won't be overwritten. As always, use a test VM to try it out before doing something in production ;)
The plan is to make the restore dialogue more flexible with regard to disks, and to make it clear what happens with each disk. Unfortunately, I have been busy with other stuff and haven't come back around to it yet.
 
But great to hear that you are working on it :)
 
May I kindly ask whether those planned changes are mostly on the informative side, or also on the functional side, as described/desired in the initial post? :)
 
Yes, functional. With the proposed changes from last time, it would be possible to select which drives should be restored, which should be kept as-is in the config, and which should be detached. The patch didn't make it into 7.2, and it would still be based on Bus+ID (only one restore action for each; it's not possible to restore scsi0 from the backup as scsi1 or something fancy like that ;)). But this might still change during further development.

And for containers, the situation is much more involved, because of the nested mount structure and because the backup is done as a single filesystem, so it's not as easy to do a partial restore or to map mount points :/
 
