[SOLVED] Help to restore a VM with 3 different hard disks

danman

Member
Jun 5, 2021
Hey

I joined a cluster and things went wrong during the process. I'm not quite sure what happened; I guess there was a mix-up with IPs when a new device joined the network.
The result was that all the VMs were gone and other strange things happened.
It doesn't really matter, though, as I was planning on reinstalling PVE anyway, so it ended up helping me with that decision.

I now have a VM backup via PBS. This VM uses 3 different storages:
Code:
#qmdump#map:efidisk0:drive-efidisk0:wolfs:raw:
#qmdump#map:scsi0:drive-scsi0:local-lvm:raw:
#qmdump#map:scsi1:drive-scsi1:wolfs-4tb:raw:

wolfs and wolfs-4tb are two ZFS pools. Both are imported again and still have the "old" VM data. local-lvm is now local-zfs.
How can I use the old VM disks on the wolfs storages? They are quite big - 1.5 TB...

Thanks
 
Hi,
during restore you can either keep the backed-up storage configuration or select a target storage to restore all the disks to. In your case you probably want to restore to the large storage and then move the disks to the smaller storages after the restore.
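A minimal sketch of that approach (the PBS storage name is a placeholder, the VMID and snapshot name are taken from later in this thread, and `qm disk move` is `qm move_disk` on older PVE versions):

Code:
# restore the whole backup to one target storage with enough space
qmrestore <pbs-storage>:backup/vm/107/2023-09-11T17:30:04Z 107 --storage wolfs-4tb

# afterwards move individual disks to where they should live
qm disk move 107 scsi0 local-zfs
qm disk move 107 efidisk0 wolfs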
 
Hm, I can't choose a config file, only a storage. There is not enough space to restore. So basically I have to delete the "old" storage and restore everything again?
I have another VM with a similar situation/size. I was hoping to avoid that :D
Is it maybe possible to delete the big storage part in the config file (PBS), restore the smaller storage, and add the bigger "old" one again?
 
The content of the VM's .conf is displayed with the "Show Configuration" button in the "Backups" section of your backup storage.
Copy/paste it to /etc/pve/qemu-server/<vmid>.conf and the VM will appear.
Then adjust/attach the storage.
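For example, a hypothetical sequence (VMID 107 is taken from the config shown later in this thread; the pasted content comes from the "Show Configuration" view):

Code:
# open (or create) the VM config file and paste the backed-up configuration into it
nano /etc/pve/qemu-server/107.conf
# the VM then appears in the WebUI and the storages can be adjusted/attached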
 
That would work if I hadn't changed to local-zfs instead of local-lvm.
I actually need to restore the local-lvm disk to local-zfs and attach the "big" storage.
 
Ah I see, so most of your disks are still there. Then the best approach is probably to restore just that one single disk, which you can do via the CLI (e.g. here https://forum.proxmox.com/threads/restore-individual-disk-from-pbs.115024/), and to restore the VM config, which you can easily get from the WebUI by selecting the backup in the PBS storage and clicking on Show Configuration, then placing the config under /etc/pve/qemu-server/<VMID>.conf

Depending on how/where you restored the VM disk, you will then have to adapt the config accordingly.
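As an illustration of adapting the config (the volume names are hypothetical and depend on what gets allocated during the restore), the scsi0 line from the backup would change from the old local-lvm volume to the new local-zfs one:

Code:
# before (as stored in the backup)
scsi0: local-lvm:vm-107-disk-0,iothread=1,size=10G
# after restoring that disk onto local-zfs
scsi0: local-zfs:vm-107-disk-1,iothread=1,size=10G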
 
Ah, this looks promising! Thanks!

Just to double-check before I run the commands: I have a VM with the following config:

Code:
efidisk0: wolfs:vm-107-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
scsi0: local-lvm:vm-107-disk-0,iothread=1,size=10G
scsi1: wolfs-4tb:vm-107-disk-0,size=1524G

I would like to have the following 2 disks
Code:
efidisk0: wolfs:vm-107-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
scsi0: local-lvm:vm-107-disk-0,iothread=1,size=10G
restored to local-zfs, and attach the old one after that.

I'm not sure about the right path. /dev/pve/ isn't there and /dev/rpool/data/... seems to be wrong ...
Code:
pbs-restore --repository root@pam@127.0.0.1:pbs-datastore vm/XXX/2023-09-11T17:30:04Z drive-efidisk0.img.fidx /dev/rpool/data/vm-XXX-disk-0 --verbose --format raw --skip-zero
pbs-restore --repository root@pam@127.0.0.1:pbs-datastore vm/XXX/2023-09-11T17:30:04Z drive-scsi0.img.fidx /dev/rpool/data/vm-XXX-disk-1 --verbose --format raw --skip-zero

Edit:
I found the solution during other restores: /dev/zvol/rpool/data/vm-101-disk-0. As soon as the current restores are done, I will test this, let you know, and change the thread to solved, if it is solved ;)
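For reference, a quick way to double-check which zvol block device to hand to pbs-restore (the pool/dataset names are the ones from this thread; adjust to your setup):

Code:
# list the ZFS volumes (zvols) on the pool
zfs list -t volume -o name,volsize

# the matching block device for a zvol lives under /dev/zvol/
ls -l /dev/zvol/rpool/data/vm-101-disk-0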
 
Hmm, now the next issue ...

What I did so far:

1. Add /etc/pve/qemu-server/vmid.conf as @_gabriel mentioned.
2. Detach/remove the EFI and local-lvm disks
3. Create new EFI and local-zfs disks with the same sizes
4.
Code:
pbs-restore --repository root@pam@127.0.0.1:pbs-datastore vm/XXX/2023-09-11T17:30:04Z drive-efidisk0.img.fidx /dev/zvol/rpool/data/vm-101-disk-0 --verbose --format raw --skip-zero
pbs-restore --repository root@pam@127.0.0.1:pbs-datastore vm/XXX/2023-09-11T17:30:04Z drive-scsi0.img.fidx /dev/zvol/rpool/data/vm-101-disk-1 --verbose --format raw --skip-zero

This is the failure:

[Screenshot: efi_issue.png]

When I restore other VMs in "full", there are no issues with EFI.

Edit (SOLUTION):
OK! Now it works! What I did was disable and re-enable backup for the old storage. I think there was no proper link to it, or I don't know.

So this is the solution, at least for me:

1. Add /etc/pve/qemu-server/vmid.conf as @_gabriel mentioned.
2. Detach/remove the EFI and local-lvm disks
3. Create new EFI and local-zfs disks with the same sizes (see the sketch below the list)
4.
Code:
pbs-restore --repository root@pam@127.0.0.1:pbs-datastore vm/XXX/2023-09-11T17:30:04Z drive-efidisk0.img.fidx /dev/zvol/rpool/data/vm-101-disk-0 --verbose --format raw --skip-zero
pbs-restore --repository root@pam@127.0.0.1:pbs-datastore vm/XXX/2023-09-11T17:30:04Z drive-scsi0.img.fidx /dev/zvol/rpool/data/vm-101-disk-1 --verbose --format raw --skip-zero
5. Disable and re-enable backup (or something else) for the old storage
6. Start the VM.
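For completeness, a minimal sketch of step 3 (VMID 101 and the sizes are taken from the configs in this thread; the ":1" for efidisk0 is just the allocation hint qm expects, the actual EFI disk stays small). The new volumes then appear as /dev/zvol/rpool/data/vm-101-disk-*:

Code:
# allocate a fresh EFI disk and a fresh 10G system disk on local-zfs
qm set 101 --efidisk0 local-zfs:1,efitype=4m,pre-enrolled-keys=1
qm set 101 --scsi0 local-zfs:10,iothread=1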
 
