Remote migration of cold VM - ERROR: no export formats for 'VM-pool:vm-9999-disk-0' - check storage plugin support!

While doing some testing with qm remote-migrate, I got this error message while trying to migrate a cold VM. However, migrating it live works OK:

Code:
qm remote-migrate 9999 9999 apitoken='Authorization: PVEAPIToken=root@pam!YY=XX-XX-XX-XX-XX,fingerprint=XX:XX...:XX',host=X.X.X.X --target-bridge vmbr0 --target-storage VM-pool --delete
-> ERROR: no export formats for 'VM-pool:vm-9999-disk-0' - check storage plugin support!

qm remote-migrate 9999 9999 apitoken='Authorization: PVEAPIToken=root@pam!YY=XX-XX-XX-XX-XX,fingerprint=XX:XX...:XX',host=X.X.X.X --target-bridge vmbr0 --target-storage VM-pool --online --delete
-> Works OK, and VM is migrated live

Any ideas??

Cheers,
Peter
 
To be precise, remote offline migration is supported, but only for certain storage combinations: e.g. ZFS -> ZFS works, RBD doesn't (our RBD plugin currently lacks offline export/import functionality).
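If you want to double-check which plugin backs a storage that throws this error, pvesm status (the Type column) or the storage definition in /etc/pve/storage.cfg will tell you; a minimal sketch using the 'VM-pool' name from the post above:
Code:
# the Type column shows the plugin behind each storage ('rbd' currently has no offline export format)
pvesm status
# or inspect the storage definition directly
grep -A 5 'VM-pool' /etc/pve/storage.cfg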
 
To be precise, remote offline migration is supported, but only for certain storage combinations: e.g. ZFS -> ZFS works, RBD doesn't (our RBD plugin currently lacks offline export/import functionality).
Is RBD still not supported?
 
Hi,
Is RBD supported online? I get the same error when I try to migrate from Ceph to Ceph.
Yes, (remote) live-migration should work no matter what storage type you are using. Remote offline migration from RBD to RBD is still not implemented, unfortunately.
 
Is there a different mechanism at work? Offline should be easier than online, shouldn't it?
Yes, live-migration uses QEMU's NBD export + disk mirroring. Remote migration is still in development, and since RBD is shared within a cluster, there simply was no need for an export/import mechanism for offline migration. That's why it hasn't been implemented yet.
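To illustrate that plugin path: for offline migration, the source storage plugin has to produce an export stream in some format that the target side can import. Conceptually (a rough sketch with invented volume and host names, not the exact tunnel qm remote-migrate sets up) it boils down to something like the following, and this export step is exactly what the RBD plugin cannot offer a format for:
Code:
# the source plugin exports the volume in a supported format (raw+size here)
# and the stream is imported on the other end - RBD offers no such format yet
pvesm export local:100/vm-100-disk-0.raw raw+size - \
  | ssh root@target-node pvesm import local:100/vm-100-disk-0.raw raw+size -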
 
I think RBD to RBD is supported now, as I managed to do it just now for one VM.
Both clusters have Ceph running, and the VM on RBD successfully migrated to the new Proxmox cluster, again on Ceph.
Wonderful!

But another VM with TPM state did not work - any ideas for this?
 
I think RBD to RBD is supported now, as I managed to do it just now for one VM.
With online migration, it works. With offline migration, it is not implemented yet.
Both clusters have Ceph running, and the VM on RBD successfully migrated to the new Proxmox cluster, again on Ceph.
Wonderful!

But another VM with TPM state did not work - any ideas for this?
Please share the full migration log and the VM configuration (qm config <ID>).
 
qm config 9997
-----------------------
agent: 1
balloon: 0
bios: seabios
boot: order=scsi0;ide0;net0
cores: 4
cpu: host
description: Server 2022
ide0: none,media=cdrom
machine: pc-q35-8.1
memory: 16384
meta: creation-qemu=8.1.2,ctime=1705037235
name: Win2022-Office
net0: virtio=BC:24:11:A3:F3:D1,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: Storage:vm-9997-disk-3,cache=writeback,discard=on,size=70G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=35245ac8-6179-4057-a50f-e461d1dc1a4a
sockets: 1
template: 0
tpmstate0: Storage:vm-9997-disk-2,size=4M,version=v2.0
vmgenid: 8c948c52-a326-48b2-9b3b-be57e6e7e882

------------------------------------------------------------------
Command given in shell (this works perfectly for machines without TPM):
---------------------------------------------------------------------------------------------------------
qm remote-migrate 9997 9997 apitoken='PVEAPIToken=root@pam!roottoken=a75a33bf-47g2-2a24-9437-f7a81bb21e59,host=172.16.0.12,fingerprint=32:42:A5:E0:A1:14:86:AC:65:A3:1B:DA:CC:93:1D:D6:3E:13:64:33:64:BE:10:BC:11:74:35:58:C8:1A:87:EB' --online --target-bridge vmbr0 --target-storage RBDNVME --delete 1
---------------------------------------------------------------------------------------------------------

Establishing API connection with remote at '172.16.0.12'
2024-06-26 12:29:32 remote: started tunnel worker 'UPID:ww1r1u12:00360062:02EDA200:667BBC54:qmtunnel:9997:root@pam!roottoken:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2024-06-26 12:29:32 local WS tunnel version: 2
2024-06-26 12:29:32 remote WS tunnel version: 2
2024-06-26 12:29:32 minimum required WS tunnel version: 2
websocket tunnel started
2024-06-26 12:29:32 starting migration of VM 9997 to node 'w12' (172.16.0.12)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2024-06-26 12:29:32 found generated disk 'Storage:vm-9997-disk-2' (in current VM config)
2024-06-26 12:29:32 found local disk 'Storage:vm-9997-disk-3' (attached)
2024-06-26 12:29:32 copying local disk images
2024-06-26 12:29:32 ERROR: no export formats for 'Storage:vm-9997-disk-2' - check storage plugin support!
2024-06-26 12:29:32 aborting phase 1 - cleanup resources
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
2024-06-26 12:29:34 ERROR: migration aborted (duration 00:00:02): no export formats for 'Storage:vm-9997-disk-2' - check storage plugin support!
migration aborted

--------------------------------------------------------------------------------------------------------

The task log is below:
------------------

2024-06-26 12:29:32 remote: started tunnel worker 'UPID:ww1r1u12:00360062:02EDA200:667BBC54:qmtunnel:9997:root@pam!roottoken:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2024-06-26 12:29:32 local WS tunnel version: 2
2024-06-26 12:29:32 remote WS tunnel version: 2
2024-06-26 12:29:32 minimum required WS tunnel version: 2
websocket tunnel started
2024-06-26 12:29:32 starting migration of VM 9997 to node 'w12' (172.16.0.12)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2024-06-26 12:29:32 found generated disk 'Storage:vm-9997-disk-2' (in current VM config)
2024-06-26 12:29:32 found local disk 'Storage:vm-9997-disk-3' (attached)
2024-06-26 12:29:32 copying local disk images
2024-06-26 12:29:32 ERROR: no export formats for 'Storage:vm-9997-disk-2' - check storage plugin support!
2024-06-26 12:29:32 aborting phase 1 - cleanup resources
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
2024-06-26 12:29:34 ERROR: migration aborted (duration 00:00:02): no export formats for 'Storage:vm-9997-disk-2' - check storage plugin support!

TASK ERROR: migration aborted
 
I see. The TPM state drive is handled differently: it is not managed directly by QEMU but by the swtpm service, so it is not live-migrated via QEMU's drive-mirror functionality, but via the storage plugin. That issue will be resolved once the RBD plugin supports offline export/import.
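Until then, a possible (untested) workaround could be to keep only the small tpmstate volume off RBD for the duration of the migration, so the plugin-based transfer has an export format to work with. A sketch assuming a directory or ZFS storage named 'local' exists on both clusters (the storage names and the storage-mapping syntax are assumptions, please double-check against your setup):
Code:
# source side (VM powered off for the move): relocate just the 4M TPM state volume
qm disk move 9997 tpmstate0 local --delete 1
# migrate, mapping the RBD storage to RBD on the target and the tpmstate's
# storage to a non-RBD storage there
qm remote-migrate 9997 9997 'apitoken=PVEAPIToken=...,host=...,fingerprint=...' \
  --online --target-bridge vmbr0 --target-storage 'Storage:RBDNVME,local:local' --delete 1
# target side: move the TPM state back onto Ceph afterwards
qm disk move 9997 tpmstate0 RBDNVME --delete 1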
 
My two cents:
  • Running two clusters on 7.4-18, identical hardware, Ceph storage on both.
  • Online migration works (as far as tested); offline migration gives "no export formats for 'Storage...'".
  • We are doing offline migration using a pipe: vzdump "$vmid" --compress 0 --mode stop --stdout | ssh "root@$targethost" -o StrictHostKeyChecking=no "qmrestore --storage rzcluster - $vmid"
  • Using the proxmoxer python3 module, the same remote-migrate online call that succeeds via pvesh fails (the pvesh invocation is sketched after the log below):
  • ERROR: online migrate failure - failed to write forwarding command - Insecure dependency in unlink while running with -T switch at /usr/share/perl5/PVE/Tunnel.pm line 214, <GEN311> line 5.
Code:
2024-08-09 04:41:04 remote: started tunnel worker 'UPID:pve-4-1-rz:xxx:qmtunnel:40516:root@pam!FULL_RIGHTS:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2024-08-09 04:41:04 local WS tunnel version: 2
2024-08-09 04:41:04 remote WS tunnel version: 2
2024-08-09 04:41:04 minimum required WS tunnel version: 2
websocket tunnel started
2024-08-09 04:41:04 starting migration of VM 40516 to node 'pve-4-1-rz' (10.11.0.41)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2024-08-09 04:41:04 found local disk 'rzcluster:vm-40516-disk-0' (in current VM config)
2024-08-09 04:41:04 mapped: net0 from vmbr1 to vmbr1
2024-08-09 04:41:04 Allocating volume for drive 'scsi0' on remote storage 'rzcluster'..
tunnel: -> sending command "disk" to remote
tunnel: <- got reply
2024-08-09 04:41:04 volume 'rzcluster:vm-40516-disk-0' is 'rzcluster:vm-40516-disk-0' on the target
tunnel: -> sending command "config" to remote
tunnel: <- got reply
tunnel: -> sending command "start" to remote
tunnel: <- got reply
2024-08-09 04:41:05 Setting up tunnel for '/run/qemu-server/40516.migrate'
2024-08-09 04:41:05 ERROR: online migrate failure - failed to write forwarding command - Insecure dependency in unlink while running with -T switch at /usr/share/perl5/PVE/Tunnel.pm line 214, <GEN311> line 5.
2024-08-09 04:41:05 aborting phase 2 - cleanup resources
2024-08-09 04:41:05 migrate_cancel
tunnel: -> sending command "stop" to remote
tunnel: <- got reply
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
2024-08-09 04:41:07 ERROR: migration finished with problems (duration 00:00:03)
TASK ERROR: migration problems
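For completeness, the pvesh call that works targets the remote_migrate API endpoint and looks roughly like this (secret and fingerprint replaced by placeholders; parameter names as listed in the API viewer, so please double-check them):
Code:
# run on the source node; this shape succeeds, while the equivalent proxmoxer call fails
pvesh create /nodes/<source-node>/qemu/40516/remote_migrate \
  --target-vmid 40516 \
  --target-endpoint 'apitoken=PVEAPIToken=root@pam!FULL_RIGHTS=<secret>,host=10.11.0.41,fingerprint=<fingerprint>' \
  --target-bridge vmbr1 --target-storage rzcluster \
  --online 1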
 
Hi,
My two cents:
  • Running two clusters on 7.4-18, identical hardware, Ceph storage on both.
  • Online migration works (as far as tested); offline migration gives "no export formats for 'Storage...'".
  • We are doing offline migration using a pipe: vzdump "$vmid" --compress 0 --mode stop --stdout | ssh "root@$targethost" -o StrictHostKeyChecking=no "qmrestore --storage rzcluster - $vmid"
  • Using the proxmoxer python3 module, the same remote-migrate online call that succeeds via pvesh fails:
  • ERROR: online migrate failure - failed to write forwarding command - Insecure dependency in unlink while running with -T switch at /usr/share/perl5/PVE/Tunnel.pm line 214, <GEN311> line 5.
please share the VM configuration
Code:
qm config 40516
and the migration command line (if started via CLI) or parameters (if started via API).

PS: Please note that Proxmox VE 7 has been end-of-life since the end of July: https://pve.proxmox.com/wiki/FAQ

EDIT: I could reproduce the issue now and will look into it. Thank you for the report!

EDIT 2: a fix has been proposed on the mailing list: https://lists.proxmox.com/pipermail/pve-devel/2024-August/065062.html
 
Thank you very much for the fast reaction and your efforts!