[SOLVED] VM volumes on wrong node

LordRatner

Hi,

I've had a VM get rather messed up in a migration. I don't know exactly what happened, but the VM is now on node1 while its volume datasets are on node2.

I can see the datasets with "zfs list" on node2; they don't exist on node1. Attempting to migrate the VM fails:

Code:
2023-01-09 22:59:32 starting migration of VM 120 to node 'node2' (192.168.10.21)
2023-01-09 22:59:33 found local disk 'local-zfs:vm-120-disk-0' (in current VM config)
2023-01-09 22:59:33 found local disk 'local-zfs:vm-120-state-vdi_trouble' (referenced by snapshot(s))
2023-01-09 22:59:33 copying local disk images
Use of uninitialized value $target_storeid in string eq at /usr/share/perl5/PVE/Storage.pm line 778.
Use of uninitialized value $targetsid in concatenation (.) or string at /usr/share/perl5/PVE/QemuMigrate.pm line 678.
2023-01-09 22:59:33 ERROR: storage migration for 'local-zfs:vm-120-disk-0' to storage '' failed - no storage ID specified
2023-01-09 22:59:33 aborting phase 1 - cleanup resources
2023-01-09 22:59:33 ERROR: migration aborted (duration 00:00:01): storage migration for 'local-zfs:vm-120-disk-0' to storage '' failed - no storage ID specified
TASK ERROR: migration aborted

I assume this is because the volumes aren't on the node the VM config is on. Obviously I can't start the VM either.

The VM disks are visible in the UI under local-zfs on node2.

Is there a way to get the volumes to the VM, or the VM to the volumes?

Thanks
Seth
"but the VM is now on Node1, but the volume datasets are on node2."

Move the corresponding VM config file (e.g. 120.conf) from /etc/pve/nodes/YourNode1/qemu-server/ to /etc/pve/nodes/YourNode2/qemu-server/ (replace YourNodeX with your actual node names). Since /etc/pve is the cluster-wide pmxcfs mount, the change is immediately visible on all nodes, and the VM will then be registered on the node where its disks live.
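The move above can be sketched as a small shell helper. The node names "node1"/"node2" and VMID 120 are examples from this thread, and the overridable base-path parameter is only there so the function can be exercised outside a real cluster; on an actual Proxmox node you would simply run the mv against /etc/pve/nodes directly:

```shell
# Sketch: re-home a VM by moving its config file between node
# directories on the pmxcfs cluster filesystem (/etc/pve/nodes).
# Usage: move_vm_config <vmid> <src_node> <dst_node> [base_path]
move_vm_config() {
  vmid="$1"
  src_node="$2"
  dst_node="$3"
  base="${4:-/etc/pve/nodes}"   # 4th arg is for dry runs/testing only
  # Moving the .conf file is what tells the cluster which node owns the VM.
  mv "$base/$src_node/qemu-server/$vmid.conf" \
     "$base/$dst_node/qemu-server/$vmid.conf"
}
```

For this thread's case that would be: move_vm_config 120 node1 node2 — do it while the VM is stopped, since the cluster treats the config's location as authoritative.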