Move EFI disk fails between RBD storage and RBD storage, or RBD storage and lvm-thin, while VM is running

Roy Compass
Dec 7, 2020
Hi,

I am having issues moving EFI disks from one RBD storage to another. The move fails with a mirroring error when the VM is running. The same error occurs when moving EFI disks from RBD storage to lvm-thin.

create full clone of drive efidisk0 (rbd-r-host-hdd-512-32-01:vm-105-disk-1)
drive mirror is starting for drive-efidisk0
drive-efidisk0: Cancelling block job
drive-efidisk0: Done.
Removing image: 100% complete...done.
TASK ERROR: storage migration failed: mirroring error: drive-efidisk0: mirroring has been cancelled
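For reference, a rough sketch of the CLI equivalent of the move I am doing in the GUI (the target storage ID "rbd-target" is a placeholder):

Code:
# sketch: move the EFI disk of running VM 105 to another RBD storage
# and delete the source image afterwards; "rbd-target" is a placeholder
qm move_disk 105 efidisk0 rbd-target --delete 1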

If the VM is powered off, the move from RBD storage to RBD storage completes with no error, but then I need to re-select and save the OVMF/UEFI boot entries.

create full clone of drive efidisk0 (rbd-r-host-hdd-512-32-01:vm-105-disk-1)
Removing image: 100% complete...done.
TASK OK

Moving from RBD storage to directory storage with the raw disk format works while the VM is running.

create full clone of drive efidisk0 (rbd-r-host-hdd-512-32-01:vm-105-disk-1)
Formatting '/mnt/pve/backup-kvm2/images/105/vm-105-disk-0.raw', fmt=raw size=131072
drive mirror is starting for drive-efidisk0
drive-efidisk0: transferred: 131072 bytes remaining: 0 bytes total: 131072 bytes progression: 100.00 % busy: 1 ready: 0
drive-efidisk0: transferred: 131072 bytes remaining: 0 bytes total: 131072 bytes progression: 100.00 % busy: 0 ready: 1
all mirroring jobs are ready
drive-efidisk0: Completing block job...
drive-efidisk0: Completed successfully.
drive-efidisk0 : finished
Removing image: 100% complete...done.
TASK OK
root@kvm-compute-02:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1


Thanks in advance
 
Hi,
thanks for reporting. I also noticed the issues around the same time and created a bug report for it. For the powered-off move there is a fix in the qemu-server 6.3-3 package, but that's currently only available in the pvetest repository.
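In case you want to try that package early, here is a sketch for enabling the pvetest repository on PVE 6.x (Debian Buster); adapt it to your setup:

Code:
# sketch: enable the pvetest repository (PVE 6.x / Debian Buster)
echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
apt install qemu-server    # should pull in qemu-server 6.3-3 or newer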

I didn't see your post back then (currently going through unanswered threads). Do you remember the online move working in the past?
 
Hi Fabian,

Thanks for your reply.

I have just started to use dedicated EFI disks, so I don't know whether online moving worked before.

Thanks
Roy
 
+1 to this issue!
(Cloning a running VM on ZFS storage: TASK ERROR: clone failed: block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled)
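For context, a sketch of the kind of full-clone command that triggers this on a running VM (the VMIDs and storage ID are placeholders):

Code:
# sketch: full clone of running VM 100 to new VMID 200 on a ZFS storage
qm clone 100 200 --full 1 --storage local-zfs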

proxmox-ve: 6.4-1 (running kernel: 5.11.21-1-pve)
pve-manager: 6.4-9 (running version: 6.4-9/5f5c0e3f)
pve-kernel-5.11: 7.0-2~bpo10
pve-kernel-5.4: 6.4-3
pve-kernel-helper: 6.4-3
pve-kernel-libc-dev: 5.11.21-1~bpo10
pve-kernel-5.11.21-1-pve: 5.11.21-1~bpo10
pve-kernel-5.11.17-1-pve: 5.11.17-1~bpo10
pve-kernel-5.4.119-1-pve: 5.4.119-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: not correctly installed
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-6
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
+1, observing this on PVE 7.

If there is an EFI block device, cloning only works on halted machines.
 
Same here: on 6.4 and 7.0, moving an EFI disk while the VM is running is not possible.
When the VM is offline, there are no problems moving it.
I'm using ZFS pools as storage in this case.
 
I have the same problem on pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-4-pve). I have two nodes in a cluster. Each node has an ext4 filesystem mounted at "/Proxmox/cfs-ext4" on its SSD. When I try to live-migrate a VM that has an EFI disk:
Code:
2021-09-22 02:00:55 starting migration of VM 102 to node 'pve-01' (192.168.24.20)
2021-09-22 02:00:55 found local disk 'cfs-ext4:102/vm-102-disk-0.qcow2' (in current VM config)
2021-09-22 02:00:55 found local disk 'cfs-ext4:102/vm-102-disk-1.qcow2' (in current VM config)
2021-09-22 02:00:55 found local disk 'local-lvm:vm-102-disk-0' (via storage)
2021-09-22 02:00:55 copying local disk images
2021-09-22 02:00:57 Formatting '/Proxmox/cfs-ext4/images/102/vm-102-disk-0.raw', fmt=raw size=4194304
2021-09-22 02:00:57 successfully imported 'cfs-ext4:102/vm-102-disk-0.raw'
2021-09-22 02:00:57 volume 'local-lvm:vm-102-disk-0' is 'cfs-ext4:102/vm-102-disk-0.raw' on the target
2021-09-22 02:00:57 starting VM 102 on remote node 'pve-01'
2021-09-22 02:01:01 volume 'cfs-ext4:102/vm-102-disk-1.qcow2' is 'cfs-ext4:102/vm-102-disk-1.qcow2' on the target
2021-09-22 02:01:01 volume 'cfs-ext4:102/vm-102-disk-0.qcow2' is 'cfs-ext4:102/vm-102-disk-2.qcow2' on the target
2021-09-22 02:01:01 start remote tunnel
2021-09-22 02:01:02 ssh tunnel ver 1
2021-09-22 02:01:02 starting storage migration
2021-09-22 02:01:02 efidisk0: start migration to nbd:unix:/run/qemu-server/102_nbd.migrate:exportname=drive-efidisk0
drive mirror is starting for drive-efidisk0
drive-efidisk0: Cancelling block job
drive-efidisk0: Done.
2021-09-22 02:01:02 ERROR: online migrate failure - block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled
2021-09-22 02:01:02 aborting phase 2 - cleanup resources
2021-09-22 02:01:02 migrate_cancel
2021-09-22 02:01:08 ERROR: migration finished with problems (duration 00:00:13)
TASK ERROR: migration problems
Offline migration works without problems.
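For reference, a sketch of the CLI equivalent of the online migration above (VMID and node name as in the log):

Code:
# sketch: online migration of VM 102 together with its local disks to node pve-01
qm migrate 102 pve-01 --online --with-local-disks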
 
The same error still exists on Proxmox 7.1 (ERROR: online migrate failure - block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled).
 
Seven months later and still no solution; it makes me wonder why I pay for support.
 
We are evaluating Proxmox as an alternative to VMware as we speak, but moving an EFI disk from one RBD storage to another fails. This is probably a showstopper for us. When is this problem going to be resolved?
 
Well Marcel, you should be able to move your EFI disk images while your VMs are offline for the time being.
On a personal level, switching to Proxmox even with that issue was seriously one of the best things I've done lately...
 
Is this a PVE bug or a QEMU/KVM one? It's still an important feature that isn't working, especially as Ceph is now a main storage feature in Proxmox...
 
I have the same error while moving an EFI disk from NFS to Ceph RBD on proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve).

Code:
create full clone of drive efidisk0 (xxx:171/vm-171-disk-1.qcow2)
drive mirror is starting for drive-efidisk0 with bandwidth limit: 716800 KB/s
drive-efidisk0: Cancelling block job
drive-efidisk0: Done.
Removing image: 100% complete...done.
TASK ERROR: storage migration failed: block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled

EDIT: the same thing happens with a normal disk attached to an EFI VM.
 
I just hit this error moving an EFI disk belonging to my Home Assistant QEMU VM from local-lvm to an NVMe SSD (which is LVM-thin). Once I halted the VM I was able to move it. Running 7.2-3, everything up to date (I just installed it last night).

Here is the VM config:
Code:
# cat 112.conf
agent: 1
balloon: 0
bios: ovmf
bootdisk: scsi0
cores: 2
efidisk0: local-lvm:vm-112-disk-0,format=raw,size=128K     <---- disk to be moved
ide2: none,media=cdrom
machine: q35
memory: 2048
name: homeassistant
net0: virtio=FE:B0:BB:A8:AB:CD,bridge=vmbr0,firewall=1
net1: virtio=0A:0F:BF:5B:CD:AB,bridge=vmbr0,firewall=1,tag=222
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm-thin-1tb-nvme:vm-112-disk-0,discard=on,size=30G
scsihw: virtio-scsi-pci
smbios1: uuid=a6237444-e9e7-4e61-beeb-6c000f6c7a17
sockets: 1
usb0: host=3-3,usb3=1
vga: vmware
vmgenid: 0c885f27-6620-4756-9ecc-2d8e4430fe63

Here's what I get when I move it in the GUI:

Code:
create full clone of drive efidisk0 (local-lvm:vm-112-disk-0)
  Rounding up size to full physical extent 4.00 MiB
  Logical volume "vm-112-disk-1" created.
drive mirror is starting for drive-efidisk0
drive-efidisk0: Cancelling block job
drive-efidisk0: Done.
  Logical volume "vm-112-disk-1" successfully removed
TASK ERROR: storage migration failed: block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled
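In case it helps, a rough sketch of the workaround that worked for me (storage ID taken from my config above):

Code:
# workaround sketch: power the VM off, move the EFI disk, start it again
qm shutdown 112
qm move_disk 112 efidisk0 local-lvm-thin-1tb-nvme --delete 1
qm start 112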
 
Hello,

Same issue for me, moving from NFS to local-zfs on a running VM.

proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)

create full clone of drive efidisk0 (Datastore2:201/vm-201-disk-0.qcow2)
drive mirror is starting for drive-efidisk0
drive-efidisk0: Cancelling block job
drive-efidisk0: Done.
TASK ERROR: storage migration failed: block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled
 
