TL;DR
I think I can summarise the below as...
Restoring a backup of a VM that has both SeaBIOS and an EFI disk (unnecessary as that combination may be) will cause the restored VM to reference an invalid/missing EFI device and fail to start up
----
So, I rebuilt my PVE 9 host from scratch yesterday. The old host (on a potentially suspect SSD) was running LVM storage, and I took the opportunity to build the new host on a new drive with ZFS (single drive)
I restored all the backups I had (PBS) to the new host, and that all went rather well.
Later, I realised I had one VM that hadn't been backed up.. so I did the following:
- Shut down the new PVE
- Swap in the old drive, and boot up the old PVE
- Remove one storage volume that isn't required (probably not relevant)
- Back up the VM to PBS
- Shut down the old PVE, swap the drives again, boot up the new PVE
- Restore said backup of the missing VM to the new PVE (rough CLI equivalents sketched below)
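For reference, a rough CLI sketch of those backup and restore steps. The storage names 'pbs' (the PBS datastore as registered on each host) and 'data' (the new ZFS storage) are placeholders matching my setup, and I'm not claiming this is exactly what the GUI does under the hood:
Code:
# on the old PVE: back up the missing VM (ID 131) to the PBS storage
vzdump 131 --storage pbs

# on the new PVE: find the backup's volume ID, then restore it onto the ZFS storage
pvesm list pbs --vmid 131
qmrestore <backup-volume-id> 131 --storage data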
Again, that all went well.. until I tried to start the newly restored VM...
Code:
TASK ERROR: storage 'local-lvm' is disabled
I guess it's somehow expecting the storage for this VM to reside on LVM rather than ZFS..
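For a quick sanity check of which storages the new host actually has enabled:
Code:
# shows each configured storage and whether it's active or disabled
pvesm status
# the storage definitions (including any 'disable' flag) live here:
cat /etc/pve/storage.cfg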
What's curious is that this didn't happen with any of the original restores. One potentially key difference: the old PVE runs slightly older packages, which were used for the backup, versus slightly newer packages on the new PVE used for the restore.
The difference being:
Code:
pve-manager: 9.0.4 --> 9.0.5
proxmox-backup-client: 4.0.12-1 --> 4.0.14-1
proxmox-backup-file-restore: 4.0.12-1 --> 4.0.14-1
qemu-server: 9.0.16 --> 9.0.17
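For anyone wanting to compare their own hosts the same way, the per-package versions can be listed on each host with:
Code:
pveversion -v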
Perhaps doing the backup on the older packages and the restore on the newer ones was a bad idea (and here I thought I was being clever)
Looking at the VM config, I can see the following disks defined
Code:
efidisk0: local-lvm:vm-131-disk-2,efitype=4m,pre-enrolled-keys=1,size=4M
scsi0: data:vm-131-disk-0,iothread=1,size=32G
While the configuration displayed by PBS (for the backup being restored) shows:
Code:
efidisk0: local-lvm:vm-131-disk-2,efitype=4m,pre-enrolled-keys=1,size=4M
scsi0: local-lvm:vm-131-disk-1,iothread=1,size=32G
So it seems the scsi0 device is being correctly remapped to the new storage, while the EFI device is not
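For anyone following along, the restored VM's side of that comparison can be dumped on the new host as below (the PBS side is the configuration stored with the backup snapshot, visible in the backup/PBS UI):
Code:
# current config of the restored VM
qm config 131
# or the raw file, if you prefer to inspect/edit it directly
cat /etc/pve/qemu-server/131.conf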
I did attempt to edit the VM config to point the EFI disk at data (ZFS) instead of local-lvm, but it clearly does not live there (the startup attempt stalled entirely)
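To double-check whether an EFI volume for this VM exists on the ZFS storage at all (assuming the storage is named 'data' and is ZFS-backed, as in my setup):
Code:
# volumes Proxmox knows about for VM 131 on the 'data' storage
pvesm list data --vmid 131
# or ask ZFS directly for zvols belonging to VM 131
zfs list -t volume | grep vm-131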
Here is where it gets more interesting
I figured it's just EFI, right? I'll just remove it and create a new EFI disk..
- Detaching the EFI storage: OK (CLI equivalent sketched below)
- Removing/destroying the storage: failed to update VM 131: storage 'local-lvm' is disabled (500)
- Creating a new EFI disk: Warning: The VM currently does not use 'OVMF (UEFI)' as BIOS.
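For reference, the detach step has a simple CLI equivalent (a sketch; VMID 131 and the efidisk0 key match my config). As far as I know this only detaches the disk, leaving it behind as an unusedN entry rather than destroying anything:
Code:
qm set 131 --delete efidisk0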
That's very interesting.. the warning is entirely valid. Taking the hint, I proceeded without an EFI disk, and the VM now starts up fine without the missing EFI device attached
So, after all that.. I had entirely missed the fact that the EFI disk isn't even used.
At a guess, the restore operation decided it was unnecessary to create one, but didn't amend the configuration accordingly.
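A quick way to confirm a VM is on SeaBIOS (and will therefore never touch an efidisk) is to check for a bios line in its config; anything other than 'bios: ovmf', including no line at all, means the SeaBIOS default:
Code:
qm config 131 | grep ^bios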
This leaves me with one remaining issue.. I have an "unused disk" entry pointing at local-lvm
While local-lvm does technically exist on the new host, it's disabled and contains no such disk
I think it's safe for me to simply remove this entry from the config file manually and pretend it never happened
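A minimal sketch of that cleanup, assuming the leftover reference landed on an unusedN key (check your own config for the actual key name; removing it via the GUI would normally also try to destroy the underlying volume, which presumably fails for the same "storage is disabled" reason, hence editing the file directly):
Code:
# find the dangling entry, e.g. 'unused0: local-lvm:vm-131-disk-2'
qm config 131 | grep ^unused
# then delete that single line from the VM's config file
nano /etc/pve/qemu-server/131.conf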