I have a Proxmox cluster with 6 nodes sharing an LVM on a single large iSCSI LUN.
My intention is to be able to start my VMs on any of the 6 nodes, provided there are enough resources available. Right now I have a VM set up on node A, and I'm confused about how to move it to node B.
1. If I try to migrate I get the following error:
2017-09-20 19:05:45 starting migration of VM 701 to node 'rbgpu6' (91.90.42.46)
2017-09-20 19:05:47 found local disk 'test1:vm-701-disk-1' (in current VM config)
2017-09-20 19:05:47 copying disk images
volume testgrp/vm-701-disk-1 already exists
command 'dd 'if=/dev/testgrp/vm-701-disk-1' 'bs=64k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2017-09-20 19:05:47 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export test1:vm-701-disk-1 raw+size - -with-snapshots 0 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=rbgpu6' root@91.90.42.46 -- pvesm import test1:vm-701-disk-1 raw+size - -with-snapshots 0' failed: exit code 255
2017-09-20 19:05:47 aborting phase 1 - cleanup resources
2017-09-20 19:05:47 ERROR: found stale volume copy 'test1:vm-701-disk-1' on node 'rbgpu6'
2017-09-20 19:05:47 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export test1:vm-701-disk-1 raw+size - -with-snapshots 0 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=rbgpu6' root@91.90.42.46 -- pvesm import test1:vm-701-disk-1 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted
2. If I try to clone, none of the other nodes is accepted as a target node.
I have the feeling I'm missing something obvious, since I haven't been able to find a solution via Google or the forum search.
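For reference, here is roughly what I understand a shared LVM storage definition in /etc/pve/storage.cfg would look like. The storage ID (test1) and volume group (testgrp) are taken from the migration log above; the "shared 1" flag is my assumption about what might be missing, since the log treats the disk as local:

lvm: test1
        vgname testgrp
        content images
        shared 1

My understanding is that without "shared 1", Proxmox assumes the LVM volumes are node-local and tries to copy the disk to the target node, which then fails because the same LUN (and thus the same volume) is already visible there.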