Unable to restore a backup with an EFI partition

marci4 · New Member · Feb 29, 2024
Hello everyone,
I am currently unable to restore a VM that uses an EFI disk from a backup (both Linux and Windows guests are affected).
The storage for the disks is LVM-thin, and the backup is stored on a server via NFS.

When I try to restore the backup from the server, I run into the following error:
Code:
restore vma archive: zstd -q -d -c /mnt/NAS/Proxmox/Backup/dump/vzdump-qemu-123-2024_02_29-13_46_15.vma.zst | vma extract -v -r /var/tmp/vzdumptmp622210.fifo - /var/tmp/vzdumptmp622210
CFG: size: 581 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 34359738368 devname: drive-scsi0
CTIME: Thu Feb 29 13:46:16 2024
  Rounding up size to full physical extent 32.00 MiB
  Logical volume "vm-124-disk-0" created.
new volume ID is 'vm_ssd:vm-124-disk-0'
  Logical volume "vm-124-disk-1" created.
new volume ID is 'vm_ssd:vm-124-disk-1'
map 'drive-efidisk0' to '/dev/vg_ssd/vm-124-disk-0' (write zeros = 0)
map 'drive-scsi0' to '/dev/vg_ssd/vm-124-disk-1' (write zeros = 0)
vma: vma_reader_register_bs for stream drive-efidisk0 failed - unexpected size 33554432 != 540672
/bin/bash: line 1: 622219 Broken pipe             zstd -q -d -c /mnt/NAS/Proxmox/Backup/dump/vzdump-qemu-123-2024_02_29-13_46_15.vma.zst
     622220 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp622210.fifo - /var/tmp/vzdumptmp622210
  Logical volume "vm-124-disk-0" successfully removed.
temporary volume 'vm_ssd:vm-124-disk-0' sucessfuly removed
  Logical volume "vm-124-disk-1" successfully removed.
temporary volume 'vm_ssd:vm-124-disk-1' sucessfuly removed
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 124 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/NAS/Proxmox/Backup/dump/vzdump-qemu-123-2024_02_29-13_46_15.vma.zst | vma extract -v -r /var/tmp/vzdumptmp622210.fifo - /var/tmp/vzdumptmp622210' failed: exit code 133
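The failing check appears to be a plain size comparison: the archive records `drive-efidisk0` as 540672 bytes, but `lvcreate` rounds every new logical volume up to a whole physical extent (the "Rounding up size to full physical extent 32.00 MiB" line), so `vma extract` sees a target of 33554432 bytes and aborts. A quick sanity check of the numbers from the log above:

```shell
# Sizes taken from the restore log above.
archive_size=540672                  # drive-efidisk0 as stored in the .vma
lv_size=$((32 * 1024 * 1024))        # the LV after rounding up to 32.00 MiB
echo "$lv_size"                      # 33554432 -- matches the vma error message
echo "$((lv_size == archive_size))"  # 0: the sizes differ, so the restore aborts
```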

Code:
root@pve:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2


Backup configuration as reported in the web UI:

Code:
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: x86-64-v2-AES
efidisk0: vm_ssd:vm-123-disk-0,efitype=4m,pre-enrolled-keys=1,size=32M
memory: 4096
meta: creation-qemu=8.1.5,ctime=1709207642
name: VMTest
net0: virtio=BC:24:11:92:4D:57,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: vm_ssd:vm-123-disk-1,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=2ad860a4-9f7b-490c-a132-743170b958a3
sockets: 1
vmgenid: e175e876-302a-4f0c-a0ea-45dcdd1b5f35
#qmdump#map:efidisk0:drive-efidisk0:vm_ssd:raw:
#qmdump#map:scsi0:drive-scsi0:vm_ssd:raw:

The config of the VM is:
Code:
root@pve:~# cat 123.conf
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: x86-64-v2-AES
efidisk0: vm_ssd:vm-123-disk-0,efitype=4m,pre-enrolled-keys=1,size=32M
memory: 4096
meta: creation-qemu=8.1.5,ctime=1709207642
name: VMTest
net0: virtio=BC:24:11:92:4D:57,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: vm_ssd:vm-123-disk-1,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=2ad860a4-9f7b-490c-a132-743170b958a3
sockets: 1
vmgenid: e175e876-302a-4f0c-a0ea-45dcdd1b5f35


Does anyone have an idea what I could try?
Is this a known issue, or am I just running into a corner case?

Thank you very much for your help.

Best regards,
Marcel
I found a workaround for this problem.

The trick is to NOT restore the backup onto an LVM-thin storage.
I restored the VM onto a directory-type storage instead, then moved the disks over to the LVM-thin storage.
This worked for both Linux and Windows VMs.
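The workaround boils down to a restore followed by a disk move; a sketch, assuming "local" is a directory-type storage, "vm_ssd" is the LVM-thin target, and 124 is a free VMID. The commands are stored and printed rather than executed here, since qmrestore and qm only exist on a PVE host:

```shell
# Workaround sketch (storage names and VMID are assumptions; adjust to your setup).
restore_cmd='qmrestore /mnt/NAS/Proxmox/Backup/dump/vzdump-qemu-123-2024_02_29-13_46_15.vma.zst 124 --storage local'
move_efi_cmd='qm disk move 124 efidisk0 vm_ssd --delete'
move_scsi_cmd='qm disk move 124 scsi0 vm_ssd --delete'
printf '%s\n' "$restore_cmd" "$move_efi_cmd" "$move_scsi_cmd"
```

After the move, the EFI disk already sits on LVM-thin at its rounded-up size, so vma never has to compare it against the archive.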

Hope this helps someone.

Best regards,
Marcel