Hello,
I am migrating containers deployed as either CentOS or Debian. When I choose to migrate one to another node (via either the GUI or the console), it always fails with:
Code:
2020-11-13 21:57:03 21:57:03 774M prod_storage/subvol-125-disk-0@__migration__
2020-11-13 21:57:04 successfully imported 'prod_storage:subvol-125-disk-0'
2020-11-13 21:57:05 full send of prod_storage/subvol-125-disk-0@__migration__ estimated size is 822M
2020-11-13 21:57:05 total estimated size is 822M
2020-11-13 21:57:05 volume 'prod_storage/subvol-125-disk-0' already exists
2020-11-13 21:57:05 TIME SENT SNAPSHOT prod_storage/subvol-125-disk-0@__migration__
2020-11-13 21:57:05 command 'zfs send -Rpv -- prod_storage/subvol-125-disk-0@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2020-11-13 21:57:05 ERROR: storage migration for 'prod_storage_pve2:subvol-125-disk-0' to storage 'prod_storage_pve2' failed - command 'set -o pipefail && pvesm export prod_storage_pve2:subvol-125-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' root@10.0.0.54 -- pvesm import prod_storage_pve2:subvol-125-disk-0 zfs - -with-snapshots 0 -delete-snapshot __migration__ -allow-rename 0' failed: exit code 255
2020-11-13 21:57:05 aborting phase 1 - cleanup resources
2020-11-13 21:57:05 ERROR: found stale volume copy 'prod_storage:subvol-125-disk-0' on node 'pve2'
2020-11-13 21:57:05 ERROR: found stale volume copy 'prod_storage_pve2:subvol-125-disk-0' on node 'pve2'
2020-11-13 21:57:05 start final cleanup
2020-11-13 21:57:05 ERROR: migration aborted (duration 00:00:11): storage migration for 'prod_storage_pve2:subvol-125-disk-0' to storage 'prod_storage_pve2' failed - command 'set -o pipefail && pvesm export prod_storage_pve2:subvol-125-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' root@10.0.0.54 -- pvesm import prod_storage_pve2:subvol-125-disk-0 zfs - -with-snapshots 0 -delete-snapshot __migration__ -allow-rename 0' failed: exit code 255
TASK ERROR: migration aborted
Any idea how to actually move these to the new node in the cluster? Thanks!
PS: and no, there is no pre-existing volume on pve2; I checked and made sure. It only appears there after the copy has finished and the migration is already underway.
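For reference, this is roughly how I verified there was no leftover dataset on pve2 before retrying (a sketch; the pool name `prod_storage`, CT ID 125, and the IP 10.0.0.54 are taken from the log above):

```shell
# On the target node (pve2): list any datasets or snapshots left over
# for container 125 on the prod_storage pool
ssh root@10.0.0.54 zfs list -t all -r prod_storage | grep subvol-125

# If a stale copy or its __migration__ snapshot shows up, it has to be
# removed before retrying the migration -- double-check the name first,
# since 'zfs destroy -r' is irreversible
ssh root@10.0.0.54 zfs destroy -r prod_storage/subvol-125-disk-0
```

Running the first command before starting the migration shows nothing for CT 125, which is why I'm confident the "already exists" volume is created by the migration itself.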