Hello,
I'm running a PVE 5.3-9 cluster without any shared filesystem (Ceph, NFS, etc.), just a single LVM device on each node, so it should be pretty much a default config. I want to offline-migrate an LXC container from one node to another, and this is what I keep getting:
Code:
root@prox05:~# pct migrate 110 prox06
2019-03-18 16:01:25 starting migration of CT 110 to node 'prox06' (10.10.0.136)
2019-03-18 16:01:25 found local volume 'backup:vm-110-disk-0' (in current VM config)
2019-03-18 16:01:25 found local volume 'vg0:vm-110-disk-0' (via storage)
Logical volume "vm-110-disk-0" created.
65536+0 records in
65536+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 57.7679 s, 74.3 MB/s
62+131078 records in
62+131078 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 60.0081 s, 71.6 MB/s
volume vg0/vm-110-disk-0 already exists
command 'dd 'if=/dev/vg0/vm-110-disk-0' 'bs=64k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2019-03-18 16:02:26 ERROR: command 'set -o pipefail && pvesm export vg0:vm-110-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=prox06' root@10.10.0.136 -- pvesm import vg0:vm-110-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
2019-03-18 16:02:26 aborting phase 1 - cleanup resources
2019-03-18 16:02:26 ERROR: found stale volume copy 'backup:vm-110-disk-0' on node 'prox06'
2019-03-18 16:02:26 ERROR: found stale volume copy 'vg0:vm-110-disk-0' on node 'prox06'
2019-03-18 16:02:26 start final cleanup
2019-03-18 16:02:26 ERROR: migration aborted (duration 00:01:02): command 'set -o pipefail && pvesm export vg0:vm-110-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=prox06' root@10.10.0.136 -- pvesm import vg0:vm-110-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
migration aborted
The target node "prox06" was empty before the migration, so the "already exists" / "found stale volume" messages make no sense to me. It also makes no difference whether I use an LXC container or a VM; I get basically the same message either way.
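In case it helps, this is roughly how I check the target node for leftover volumes before retrying. This is just a sketch: the storage names `vg0` and `backup` and the CT ID 110 come from my setup, and the commands need to run as root on prox06.

```shell
#!/bin/sh
# Check whether the target node really is empty before retrying the migration.
# Storage names ("vg0", "backup") and the CT ID (110) are from my config;
# adjust them for your cluster.

# List the logical volumes in the volume group backing the "vg0" LVM storage:
lvs vg0

# Ask Proxmox what it thinks each storage holds for this CT:
pvesm list vg0 --vmid 110
pvesm list backup --vmid 110

# If a leftover vm-110-disk-0 shows up even though the CT never ran here,
# it can be removed with pvesm free -- but double-check the volume first:
# pvesm free vg0:vm-110-disk-0
```

If one of these does report a stray `vm-110-disk-0` on prox06, that would at least explain the "already exists" error, even if not where the volume came from.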
I'll gladly provide further information; thanks in advance for any help.