Remote Migration with Shared Storage not working?

jauling

I've got two non-clustered nodes that have the same NFS storage mapped. I'm trying to live migrate a VM between them. I'd expect it to be just a memory copy, since the storage is already in place, but I see the remote-migrate task kick off by copying the disk image.

Code:
# qm remote-migrate 113 113 'apitoken=PVEAPIToken=root@pam!vm1-migrate=SECRET,host=10.4.2.99,fingerprint=FINGERPRINT' --target-bridge 1 --target-storage nfserver1 --online
Establishing API connection with remote at '10.4.2.99'
2026-02-15 01:24:38 conntrack state migration not supported or disabled, active connections might get dropped
2026-02-15 01:24:38 remote: started tunnel worker 'UPID:proxmox2:00255FCC:03BCAEC8:69911246:qmtunnel:113:root@pam!vm1-migrate:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2026-02-15 01:24:38 local WS tunnel version: 2
2026-02-15 01:24:38 remote WS tunnel version: 2
2026-02-15 01:24:38 minimum required WS tunnel version: 2
websocket tunnel started
2026-02-15 01:24:38 starting migration of VM 113 to node 'proxmox2' (10.4.2.99)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2026-02-15 01:24:38 found local disk 'nfserver1:113/vm-113-disk-0.qcow2' (attached)
2026-02-15 01:24:38 mapped: net0 from vmbr1 to vmbr1
2026-02-15 01:24:38 Allocating volume for drive 'scsi0' on remote storage 'nfserver1'..
tunnel: -> sending command "disk" to remote
tunnel: <- got reply
2026-02-15 01:24:39 volume 'nfserver1:113/vm-113-disk-0.qcow2' is 'nfserver1:113/vm-113-disk-1.qcow2' on the target
tunnel: -> sending command "config" to remote
tunnel: <- got reply
tunnel: -> sending command "start" to remote
tunnel: <- got reply
2026-02-15 01:24:41 Setting up tunnel for '/run/qemu-server/113.migrate'
2026-02-15 01:24:41 Setting up tunnel for '/run/qemu-server/113_nbd.migrate'
2026-02-15 01:24:41 starting storage migration
2026-02-15 01:24:41 scsi0: start migration to nbd:unix:/run/qemu-server/113_nbd.migrate:exportname=drive-scsi0
tunnel: accepted new connection on '/run/qemu-server/113_nbd.migrate'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/qemu-server/113_nbd.migrate'
drive mirror is starting for drive-scsi0
mirror-scsi0: transferred 93.0 MiB of 16.0 GiB (0.57%) in 1s
mirror-scsi0: transferred 184.0 MiB of 16.0 GiB (1.12%) in 2s
...
mirror-scsi0: transferred 10.8 GiB of 16.0 GiB (67.31%) in 2m 7s
mirror-scsi0: transferred 10.8 GiB of 16.0 GiB (67.55%) in 2m 8s
^Cmirror-scsi0: Cancelling block job

I tried using --target-storage 1 since I wasn't moving the storage, but it didn't like that and complained "remote migration requires explicit storage mapping!", so I had to pass the storage name explicitly. Storage migration and drive mirroring then kicked off. The task was taking far too long, so I cancelled it, then unlocked VM 113 on the target node and destroyed it, no big deal.

Am I doing something wrong here? Or am I playing with a feature that isn't available yet? Maybe what I'm expecting doesn't work with disk images?

The quick way to migrate an offline VM between two non-clustered nodes that share storage is still to duplicate the VMID.conf file in /etc/pve/qemu-server/ on the target node, run qm rescan, then stop the VM on the source and start it on the target. Downtime is effectively just the time it takes to shut down and boot the VM.
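For VM 113 and the 10.4.2.99 target from the log above, that sequence looks roughly like this (just a sketch; adapt paths and addresses, and keep in mind the VMID exists on both nodes until you clean up the source config):

Code:
# 1. Copy the config to the target node; the disk itself stays on the shared NFS storage.
scp /etc/pve/qemu-server/113.conf root@10.4.2.99:/etc/pve/qemu-server/
# 2. On the target node, let PVE pick up the disk for the new config.
qm rescan --vmid 113
# 3. Stop the VM on the source node...
qm shutdown 113
# 4. ...and start it on the target node.
qm start 113
# 5. Optionally remove /etc/pve/qemu-server/113.conf on the source afterwards,
#    so the same VMID doesn't stay startable on both nodes.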
 
Am I doing something wrong here?
No. But from my perspective it is expected behavior to copy the data when migrating from one standalone machine to another standalone machine.

The alternative is (of course) to build a cluster. Only then does the relevant software know what "shared storage" means...
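For reference, the basic cluster setup itself is only a few commands (a sketch; "homelab" is just a placeholder name, and the joining node must not have any guests yet):

Code:
# On the first node: create the cluster.
pvecm create homelab
# On the second node: join it (asks for the first node's root password and fingerprint).
pvecm add <ip-of-first-node>
# Verify quorum and membership.
pvecm status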
 
Thanks for the response. I was hoping to avoid clustering for a two-node home deployment, mainly due to the additional overhead and complexity. Then again, offline/manual migration seems like a simpler solution for my use case. Live migration (without storage migration) is cool though :cool:

I guess it's up to me to decide whether I can live with the unnecessary storage migration, if online/live migration is more important to me.

I do remember doing an ls of the disk image location for VM 113 during the storage migration and saw that it was indeed copying vm-113-disk-0.qcow2 to vm-113-disk-1.qcow2. I suppose using a different disk image format wouldn't matter?
 
and saw that it was indeed copying vm-113-disk-0.qcow2 to vm-113-disk-1.qcow2. I suppose using a different disk image format wouldn't matter?
For VM disks not to be copied, the underlying storage needs to be officially tagged as "shared". Compare your settings with the table: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_storage_types

And it needs to be actually configured accordingly. Just having "the same" storage on two nodes is not sufficient.
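If in doubt, check what is really configured on each node; for a plain directory storage the flag has to be set by hand (a sketch, "somedir" is just a placeholder storage name; NFS/CIFS should count as shared automatically as far as I know):

Code:
# Show the storage definitions as PVE sees them on this node.
cat /etc/pve/storage.cfg
# A directory storage is local by default; mark it shared explicitly only if both
# nodes really see the same files under its path.
pvesm set somedir --shared 1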
 
The alternative is (of course) to build a cluster. Only then does the relevant software know what "shared storage" means...
Or use pve-zsync, but then you can't use the NFS share on your NAS, and you won't have migration or failover like with a cluster. Instead you would replicate the VMs/LXCs from one node to the other and would be able to launch them in case one node gets lost:
https://pve.proxmox.com/wiki/PVE-zsync
Another option might be to configure the shared storage on both hosts so that it has the same name and IDs, take care that no VM/LXC on the two nodes shares an ID, and then implement an automated sync of your VM/LXC configs to the target plus a manual shutdown/startup on the old and new host. In theory it should be possible to script something around it.
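Purely as an illustration, such a script could look something like this (hypothetical and untested; the VMID, the target address and root SSH keys between the nodes are all assumptions):

Code:
#!/bin/bash
# Sketch: move a VM between two non-clustered nodes that mount the same
# shared storage under the same storage name. Untested - adapt before use.
set -euo pipefail

VMID="$1"                     # e.g. 113
TARGET="root@10.4.2.99"       # example target node from this thread
CONF="/etc/pve/qemu-server/${VMID}.conf"

# Refuse to run if the target already has a guest with this VMID.
if ssh "$TARGET" "test -e $CONF"; then
    echo "VMID $VMID already exists on the target" >&2
    exit 1
fi

qm shutdown "$VMID"                        # clean shutdown on the source
scp "$CONF" "$TARGET:$CONF"                # hand the config over
rm "$CONF"                                 # the VMID must not live on both nodes
ssh "$TARGET" "qm rescan --vmid $VMID && qm start $VMID"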
Regarding the cluster: if you want to go that route you need a qdevice. Can your NFS server host Docker images? How do you host your NFS shares, on a dedicated NAS?
But for a cluster you should also have a dedicated cluster network, and you get additional complexity compared to a simple two-node setup + NFS server + a Proxmox Datacenter Manager VM (if you want to remote-migrate without the CLI).
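If the NAS (or any other always-on machine) can run corosync-qnetd, either natively on a Debian-based system or in a container, the qdevice setup itself is short (a sketch; 10.4.2.100 is just an example address):

Code:
# On the external qdevice host (Debian-based example):
apt install corosync-qnetd
# On both cluster nodes:
apt install corosync-qdevice
# Then from one cluster node, register the qdevice:
pvecm qdevice setup 10.4.2.100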
 
Another option might be to configure the shared storage on both hosts so that it has the same name and IDs, take care that no VM/LXC on the two nodes shares an ID, and then implement an automated sync of your VM/LXC configs to the target plus a manual shutdown/startup on the old and new host. In theory it should be possible to script something around it.
Regarding the cluster: if you want to go that route you need a qdevice. Can your NFS server host Docker images? How do you host your NFS shares, on a dedicated NAS?
But for a cluster you should also have a dedicated cluster network, and you get additional complexity compared to a simple two-node setup + NFS server + a Proxmox Datacenter Manager VM (if you want to remote-migrate without the CLI).
The option you listed here is, I guess, meant for seamless offline migrations. That's probably what I'll resort to, as it's the simplest. It's just too bad it means giving up live migrations.

For VM disks not to be copied, the underlying storage needs to be officially tagged as "shared". Compare your settings with the table: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_storage_types

And it needs to be actually configured accordingly. Just having "the same" storage on two nodes is not sufficient.
Of course I have my NFS storage tagged as shared on both source and target. At least, that's what the UI shows. I noticed that there is no explicit "shared 1" line in /etc/pve/storage.cfg, which seems strange. Does that default to true?

Code:
nfs: nfserver1
        export /volume1/proxmox
        path /mnt/pve/nfserver1
        server 10.4.2.100
        content backup,images
        prune-backups keep-all=1

dir: isostore
        path /mnt/isostore
        content iso,images
        prune-backups keep-all=1
        shared 1
 
At least, that's what the UI shows. I noticed that there is no explicit "shared 1" line in /etc/pve/storage.cfg, which seems strange. Does that default to true?
Unfortunately I have no NFS storage configured, so I can't confirm this.

But my only "cifs" share does confirm your observation: no explicit "shared 1" but it is shared.
 
Unfortunately I have no NFS storage configured, so I can't confirm this.

But my only "cifs" share does confirm your observation: no explicit "shared 1" but it is shared.
I think that regardless of CIFS or NFS, in a non-clustered configuration, even with shared storage, online live migration between nodes won't work without storage migration, as you previously stated. Right?
 
It looks like this was brought up a couple of years ago and a Bugzilla entry was filed, but there has been no noticeable movement. Is it possible to bump it?

https://bugzilla.proxmox.com/show_bug.cgi?id=4928