Hi,
I am facing a problem after replication between two nodes hung. I am now seeing the following error when trying to migrate the two containers that were being replicated when replication stopped working. Please bear with me, I am a long-time Linux user but new to ZFS. Can you please give me a hint on how to safely remove the stale copy of the VM?
This is the error I get (IPs and names removed):
Bash:
2020-09-02 21:36:37 starting migration of CT 310 to node 'XX' (xxx.xxx.xxx.xxx)
2020-09-02 21:36:37 found local volume 'local-zfs:subvol-310-disk-0' (in current VM config)
2020-09-02 21:36:38 full send of rpool/data/subvol-310-disk-0@__migration__ estimated size is 2.42G
2020-09-02 21:36:38 total estimated size is 2.42G
2020-09-02 21:36:38 rpool/data/subvol-310-disk-0 name rpool/data/subvol-310-disk-0 -
2020-09-02 21:36:38 volume 'rpool/data/subvol-310-disk-0' already exists
2020-09-02 21:36:38 TIME SENT SNAPSHOT rpool/data/subvol-310-disk-0@__migration__
2020-09-02 21:36:38 command 'zfs send -Rpv -- rpool/data/subvol-310-disk-0@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2020-09-02 21:36:39 ERROR: storage migration for 'local-zfs:subvol-310-disk-0' to storage 'local-zfs' failed - command 'set -o pipefail && pvesm export local-zfs:subvol-310-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=XX' root@xxx.xxx.xxx.xxx -- pvesm import local-zfs:subvol-310-disk-0 zfs - -with-snapshots 0 -delete-snapshot __migration__ -allow-rename 0' failed: exit code 255
2020-09-02 21:36:39 aborting phase 1 - cleanup resources
2020-09-02 21:36:39 ERROR: found stale volume copy 'local-zfs:subvol-310-disk-0' on node 'XX'
2020-09-02 21:36:39 start final cleanup
2020-09-02 21:36:39 ERROR: migration aborted (duration 00:00:02): storage migration for 'local-zfs:subvol-310-disk-0' to storage 'local-zfs' failed - command 'set -o pipefail && pvesm export local-zfs:subvol-310-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=XX' root@xxx.xxx.xxx.xxx -- pvesm import local-zfs:subvol-310-disk-0 zfs - -with-snapshots 0 -delete-snapshot __migration__ -allow-rename 0' failed: exit code 255
TASK ERROR: migration aborted
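From the last ERROR line I gather there is a leftover dataset rpool/data/subvol-310-disk-0 on the target node 'XX'. My guess (please correct me if this is wrong) would be to inspect it there first and then destroy it, roughly like this:
Bash:
# On the TARGET node 'XX': check what is actually there,
# including any leftover snapshots such as @__migration__
zfs list -t all -r rpool/data/subvol-310-disk-0

# After making sure no container on the target node is using
# this dataset, destroy it together with all its snapshots
zfs destroy -r rpool/data/subvol-310-disk-0
Is that the safe way to clean this up, or is there a Proxmox-level way I should use instead so the replication state stays consistent?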
Thanks a lot for your help; it is highly appreciated!
Best regards!