Feature Request: advanced restore options in GUI

Yes, functional. With the proposed changes from last time, it would be possible to select which drives should be restored, which should be kept as-is in the config, and which should become detached. The patch didn't make it into 7.2, and it would still be based upon Bus+ID (only one restore action for each; it's not possible to restore scsi0 in the backup as scsi1 or something fancy like that ;)). But this might still change during further development.

And for containers, the situation is much more involved, because of the nested mount structure and because the backup is done as a single filesystem, so it's not as easy to do a partial restore or map mount points :/

Awesome! Really happy to hear/read that. Thank you very much for the explanation. :)
 
May I humbly suggest adding these options on a PER BACKUP basis, rather than a per VM basis?

There are certainly times when someone may wish to include only one disk in (for example) their daily backups, but include all disks in the weekly backups.

Currently I just use cron and rsync to grab those excluded disks, but a way to do it from within PVE would be fabulous!
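
For the record, a minimal sketch of that workaround (hypothetical paths, VMID, and target host; this assumes file-based disk images on directory storage, and the VM should be stopped or snapshotted for a consistent copy):

Code:
# crontab entry: nightly rsync of an excluded disk image to another host
0 2 * * * rsync -a --sparse /var/lib/vz/images/100/vm-100-disk-1.qcow2 backuphost:/backup/vm100/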
 
The ability to restore only specific disks (and leave the other disks untouched) would round this out further.
Absolutely! I had a situation today where the boot drive of a VM corrupted and I had to restore an entire VM to a different VMID to then reassign the drive. That VM has 4 drives, and I had to restore over 2 TB of data just for the one 64 GB boot drive.
 
Absolutely! I had a situation today where the boot drive of a VM corrupted and I had to restore an entire VM to a different VMID to then reassign the drive. That VM has 4 drives, and I had to restore over 2 TB of data just for the one 64 GB boot drive.
Similar situation here.

My last restore took almost a day to complete...
 
+1

Having more robust selection control during backups and restores sure would be welcome.
 
May I humbly suggest adding these options on a PER BACKUP basis, rather than a per VM basis?

There are certainly times when someone may wish to include only one disk in (for example) their daily backups, but include all disks in the weekly backups.

Currently I just use cron and rsync to grab those excluded disks, but a way to do it from within PVE would be fabulous!
Huh, I never considered that. Great point! Currently, all the VM disks I exclude are too big for backups (~5 TB) and live on "safe" (replicated) storage; it never occurred to me that you might want some disks backed up more frequently than others. +1 to this.
 
Hi,

this doesn't happen. For VMs, disks that were excluded from the backup will not be wiped upon restore. They will be left as unused disks still containing their data. You can just reattach them after restoring the backup.

Again, the Bus+ID combination is used to detect if a disk is "the same": If scsi0 is in the backup, it will overwrite the scsi0 disk of the VM upon restore. If scsi1 is excluded, it won't end up in the backup. If scsi1 is not in the backup, it won't overwrite the scsi1 of the VM upon restore.
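
For illustration, a VM config right after such a restore might look like this (storage and disk names hypothetical):

Code:
# the backup contained scsi0 only; the excluded scsi1 was kept as an unused disk
scsi0: local-zfs:vm-100-disk-0,size=32G
unused0: local-zfs:vm-100-disk-1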

For containers, the situation is different, because the backup is done as a single filesystem, so if a mount point is excluded you will lose it upon restore. There is a warning for this.

EDIT: clarify a bit more.

Not true, I just tried it and got this message:

Code:
CT 103 - Restore. This will permanently erase current CT data.
Mount point volumes are also erased.

And my mount point mp0 has no data anymore ...
 
Not true, I just tried it and got this message:

Code:
CT 103 - Restore. This will permanently erase current CT data.
Mount point volumes are also erased.

And my mount point mp0 has no data anymore ...

Hm? This is exactly what @fiona said and you even quoted:
For containers, the situation is different, because the backup is done as a single filesystem, so if a mount point is excluded you will lose it upon restore. There is a warning for this.

For containers, there is a big warning already:
Code:
CT 121 - Restore. This will permanently erase current CT data. Mount point volumes are also erased.

Container = CT = LXC is not the same as a virtual machine = VM! ;)
 
Sorry - up to now, VMs and containers were all the same to me.

So should we raise a feature request? Because for me as a "user", this behaviour makes no sense ...

Because technically, the reason I created a mount point was that I wanted to back up the container without touching the data.
 
Same situation as several others have mentioned. I didn't mind waiting for a backup to complete, especially considering the peace of mind gained by knowing my data drives were also safe, but I'm now 6 hours into a restore and have calculated about 40 hours remaining. All of this to simply recover a 32GB boot drive. As a "non-subscriber" I certainly don't have the right to complain, but I have a hard time believing anybody would opt for an Enterprise subscription without this basic functionality.

FWIW....

scsi0: 32 GB (needed to be restored)
scsi1: 300 GB (didn't need to be restored)
scsi2: 1.95 TB (didn't need to be restored)

For those that haven't figured it out - it is good practice to edit the drives in the hardware configuration and un-check "Backup" (under Advanced) for all drives except the boot drive. Then perform a backup, and in the notes you can add "boot drive only". This will allow you to quickly restore the boot drive and a config. Then edit the config located in /etc/pve/qemu-server to add the data drives.
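
For reference, that per-disk setting ends up in the VM config as a backup=0 flag on the drive line, something like this (storage and disk names hypothetical):

Code:
# excluded from backups via the "Backup" checkbox (backup=0)
scsi1: local-zfs:vm-100-disk-1,size=300G,backup=0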

To play it safe, I recommend restoring to a different VMID than the original. The new config will already point to the newly restored boot drive and you can add entries for the already existing data drive(s).
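
For illustration, the added entries in /etc/pve/qemu-server/<new-vmid>.conf could look like this (storage and disk names hypothetical, sizes matching the example above):

Code:
# existing data disks of the old VM, reattached to the restored config
scsi1: local-zfs:vm-100-disk-1,size=300G
scsi2: local-zfs:vm-100-disk-2,size=1995G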

If you want everything just the way it was, you can rename the new config file (mv <old-name> <new-name>), which will also update it in the GUI.

If you want to rename any drives and they are on a ZFS pool, you can use the following:

Code:
zfs rename <old> <new>

where

old = <pool-name>/<path, if nested>/vm-<old-vmid>-disk-X
new = <pool-name>/<path, if nested>/vm-<new-vmid>-disk-X
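
A concrete example, with a hypothetical pool layout and VMIDs:

Code:
# rename the data disk of old VMID 100 to match the restored VMID 101
zfs rename rpool/data/vm-100-disk-1 rpool/data/vm-101-disk-1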

If you want to move any disks to different storage, you can do so once the configuration file is finalized by using the GUI as follows:

Make sure the VM is off, highlight the hard drive under the Hardware tab, and select Disk Action >>> Move Storage >>> select the new location.
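
If you prefer the CLI, there is an equivalent command (the subcommand was renamed at some point; recent PVE versions use qm disk move, older ones qm move_disk). Storage name hypothetical:

Code:
# move scsi2 of VM 101 to the storage "tank-hdd" while the VM is off
qm disk move 101 scsi2 tank-hdd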
 
Hello, is this coming soon?

Not being able to select disks during a restore is really a missing basic feature... we are nearing 2024, and this is a must for large VMs with multiple large disks.
Same for me. I've been waiting over 2 years for it now...
 
Hi,
Hello, is this coming soon?

Not being able to select disks during a restore is really a missing basic feature... we are nearing 2024, and this is a must for large VMs with multiple large disks.
unfortunately, I cannot give any time estimate. I'm still working on other things at the moment and nobody else has picked up my patches either. That said, you can already do this via CLI:

For PBS, the disk images are separate in the archive, so you can just restore the one you are interested in with the usual proxmox-backup-client restore command.
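
A sketch with hypothetical repository and snapshot names (check the snapshot listing for the exact archive name, shown there as e.g. drive-scsi1.img.fidx):

Code:
# list the snapshots in the datastore
proxmox-backup-client snapshot list --repository root@pam@pbs.example.com:store1
# restore only the scsi1 image of VM 113 to a local raw file
proxmox-backup-client restore "vm/113/2023-09-08T09:23:15Z" drive-scsi1.img vm-113-scsi1.raw --repository root@pam@pbs.example.com:store1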

For .vma backups, you could always do it via the -r option, and since pve-qemu-kvm >= 8.0.2-6 there is a much more user-friendly way with the -d option, e.g. vma extract vzdump-qemu-113-2023_09_08-11_23_15.vma target-dir -d "drive-scsi1" will only extract the scsi1 image.
 
Hi,

unfortunately, I cannot give any time estimate. I'm still working on other things at the moment and nobody else has picked up my patches either. That said, you can already do this via CLI:

For PBS, the disk images are separate in the archive, so you can just restore the one you are interested in with the usual proxmox-backup-client restore command.

For .vma backups, you could always do it via the -r option, and since pve-qemu-kvm >= 8.0.2-6 there is a much more user-friendly way with the -d option, e.g. vma extract vzdump-qemu-113-2023_09_08-11_23_15.vma target-dir -d "drive-scsi1" will only extract the scsi1 image.
Of course, and thank you a lot. The request is more about a CLI-free restore of images/disks, that's the point. If you have a broken server and need to restore some data ASAP, and you don't run CLI restores every day, looking up commands is just wasted time that you could already spend on the actual restore of a disk. I hope you understand what I mean.
 
I know this is an older thread by now, but damn - this situation came up for me today, and it looks like I'll be waiting 48 hours to restore a 70 GB boot drive because I must also restore the 2.5 TB of related disks along with it... not cool!
 
I know this is an older thread by now, but damn - this situation came up for me today, and it looks like I'll be waiting 48 hours to restore a 70 GB boot drive because I must also restore the 2.5 TB of related disks along with it... not cool!
They have time to develop a VMware import tool, but no time to develop a Hyper-V import or a simple single-disk restore option in the GUI.
 
