I've got two non-clustered nodes that have the same mapped NFS storage locations. I'm trying to live migrate a VM between these two nodes. I'd think it should be just a memory copy, since the storage is already in place, but I see the remote-migrate task kick off by copying the disk image.
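For reference, the NFS storage is defined the same way on both nodes. A rough sketch of what the relevant /etc/pve/storage.cfg entry looks like (the server address and export path here are placeholders, not my actual values):

Code:
nfs: nfserver1
        server 10.4.2.x
        export /srv/nfs/pve-images
        path /mnt/pve/nfserver1
        content images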
Code:
# qm remote-migrate 113 113 'apitoken=PVEAPIToken=root@pam!vm1-migrate=SECRET,host=10.4.2.99,fingerprint=FINGERPRINT' --target-bridge 1 --target-storage nfserver1 --online
Establishing API connection with remote at '10.4.2.99'
2026-02-15 01:24:38 conntrack state migration not supported or disabled, active connections might get dropped
2026-02-15 01:24:38 remote: started tunnel worker 'UPID:proxmox2:00255FCC:03BCAEC8:69911246:qmtunnel:113:root@pam!vm1-migrate:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2026-02-15 01:24:38 local WS tunnel version: 2
2026-02-15 01:24:38 remote WS tunnel version: 2
2026-02-15 01:24:38 minimum required WS tunnel version: 2
websocket tunnel started
2026-02-15 01:24:38 starting migration of VM 113 to node 'proxmox2' (10.4.2.99)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2026-02-15 01:24:38 found local disk 'nfserver1:113/vm-113-disk-0.qcow2' (attached)
2026-02-15 01:24:38 mapped: net0 from vmbr1 to vmbr1
2026-02-15 01:24:38 Allocating volume for drive 'scsi0' on remote storage 'nfserver1'..
tunnel: -> sending command "disk" to remote
tunnel: <- got reply
2026-02-15 01:24:39 volume 'nfserver1:113/vm-113-disk-0.qcow2' is 'nfserver1:113/vm-113-disk-1.qcow2' on the target
tunnel: -> sending command "config" to remote
tunnel: <- got reply
tunnel: -> sending command "start" to remote
tunnel: <- got reply
2026-02-15 01:24:41 Setting up tunnel for '/run/qemu-server/113.migrate'
2026-02-15 01:24:41 Setting up tunnel for '/run/qemu-server/113_nbd.migrate'
2026-02-15 01:24:41 starting storage migration
2026-02-15 01:24:41 scsi0: start migration to nbd:unix:/run/qemu-server/113_nbd.migrate:exportname=drive-scsi0
tunnel: accepted new connection on '/run/qemu-server/113_nbd.migrate'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/qemu-server/113_nbd.migrate'
drive mirror is starting for drive-scsi0
mirror-scsi0: transferred 93.0 MiB of 16.0 GiB (0.57%) in 1s
mirror-scsi0: transferred 184.0 MiB of 16.0 GiB (1.12%) in 2s
...
mirror-scsi0: transferred 10.8 GiB of 16.0 GiB (67.31%) in 2m 7s
mirror-scsi0: transferred 10.8 GiB of 16.0 GiB (67.55%) in 2m 8s
^Cmirror-scsi0: Cancelling block job
I tried using --target-storage 1 since I wasn't moving the storage, but it didn't like that and complained "remote migration requires explicit storage mapping!", so I was forced to explicitly pass the storage name. Storage migration and drive mirroring kicked off. The task started taking stupid long, so I cancelled it. I then unlocked the 113 VM on the target node and destroyed it (roughly the commands sketched below), nbd.

Am I doing something wrong here? Or am I playing with a feature that isn't available yet? Maybe what I'm expecting doesn't work with disk images?
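For completeness, the cleanup on the target node after cancelling amounted to roughly this (a sketch; what qm destroy actually removes depends on what the aborted migration left referenced in the target config, so double-check before running it):

Code:
# on the target node (10.4.2.99)
qm unlock 113    # clear the migration lock left by the aborted transfer
qm destroy 113   # drop the target-side config and any disks it references,
                 # e.g. the partially mirrored vm-113-disk-1.qcow2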
The quick way to migrate an offline VM between two non-clustered nodes that share storage is still to duplicate the VMID.conf file into /etc/pve/qemu-server/ on the target node, run qm rescan, then stop the VM on the source and start it on the target. Downtime is effectively just the time it takes to shut down and boot the VM.
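Using the VMID and target address from your output as placeholders, the sequence is roughly this (a sketch, so double-check paths and node roles before removing anything):

Code:
# on the source node: copy the VM config to the target
scp /etc/pve/qemu-server/113.conf root@10.4.2.99:/etc/pve/qemu-server/

# on the target node: pick up the volumes already sitting on the shared storage
qm rescan

# swap over: stop on the source, then start on the target
qm shutdown 113      # on the source node
qm start 113         # on the target node

# finally, remove the config from the source so only one node owns the VM
# (use plain rm here, NOT qm destroy, which would delete the shared disk image)
rm /etc/pve/qemu-server/113.conf    # on the source node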