Hi,
I'm trying to cold-migrate an LXC container from one host to another.
The container I want to migrate is located on a "local-zfs" ZFS pool that was created on the first node and is enabled on every cluster node.
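For reference, such a storage is normally defined cluster-wide in /etc/pve/storage.cfg as a zfspool entry; the snippet below is only a sketch of what I mean (the pool name rpool/data matches the log further down, the other options are assumptions, not my exact file):
Code:
# /etc/pve/storage.cfg (sketch of a typical zfspool definition)
zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1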
Here is the migration log:
Code:
2018-08-31 10:12:54 starting migration of CT 162 to node 'proxmox5-staging-02' (192.168.10.51)
2018-08-31 10:12:54 found local volume 'local-zfs-pm502:subvol-162-disk-1' (via storage)
2018-08-31 10:12:54 found local volume 'local-zfs:subvol-162-disk-1' (in current VM config)
full send of rpool/data/subvol-162-disk-1@__migration__ estimated size is 1.21G
total estimated size is 1.21G
TIME SENT SNAPSHOT
10:12:55 96.3M rpool/data/subvol-162-disk-1@__migration__
10:12:56 207M rpool/data/subvol-162-disk-1@__migration__
10:12:57 319M rpool/data/subvol-162-disk-1@__migration__
10:12:58 430M rpool/data/subvol-162-disk-1@__migration__
10:12:59 541M rpool/data/subvol-162-disk-1@__migration__
10:13:00 653M rpool/data/subvol-162-disk-1@__migration__
10:13:01 761M rpool/data/subvol-162-disk-1@__migration__
10:13:02 872M rpool/data/subvol-162-disk-1@__migration__
10:13:03 982M rpool/data/subvol-162-disk-1@__migration__
10:13:04 1.07G rpool/data/subvol-162-disk-1@__migration__
10:13:05 1.15G rpool/data/subvol-162-disk-1@__migration__
10:13:06 1.24G rpool/data/subvol-162-disk-1@__migration__
full send of rpool/data/subvol-162-disk-1@__migration__ estimated size is 1.21G
total estimated size is 1.21G
TIME SENT SNAPSHOT
rpool/data/subvol-162-disk-1 name rpool/data/subvol-162-disk-1 -
volume 'rpool/data/subvol-162-disk-1' already exists
command 'zfs send -Rpv -- rpool/data/subvol-162-disk-1@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2018-08-31 10:13:07 ERROR: command 'set -o pipefail && pvesm export local-zfs:subvol-162-disk-1 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox5-staging-02' root@192.168.10.51 -- pvesm import local-zfs:subvol-162-disk-1 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 255
2018-08-31 10:13:07 aborting phase 1 - cleanup resources
2018-08-31 10:13:07 ERROR: found stale volume copy 'local-zfs-pm502:subvol-162-disk-1' on node 'proxmox5-staging-02'
2018-08-31 10:13:07 ERROR: found stale volume copy 'local-zfs:subvol-162-disk-1' on node 'proxmox5-staging-02'
2018-08-31 10:13:07 start final cleanup
2018-08-31 10:13:07 ERROR: migration aborted (duration 00:00:13): command 'set -o pipefail && pvesm export local-zfs:subvol-162-disk-1 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox5-staging-02' root@192.168.10.51 -- pvesm import local-zfs:subvol-162-disk-1 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 255
TASK ERROR: migration aborted
It seems that the migration finds the container's disk in two different ZFS storages, which is not the case (it only exists on local-zfs):
Code:
2018-08-31 10:12:54 found local volume 'local-zfs-pm502:subvol-162-disk-1' (via storage)
2018-08-31 10:12:54 found local volume 'local-zfs:subvol-162-disk-1' (in current VM config)
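If it matters, this is roughly how I would double-check which storage definitions reference that dataset on the source node (just a sketch of the commands, not actual output; the storage IDs and dataset name are the ones from the log above):
Code:
# show every zfspool storage definition and the pool it points to
grep -B1 -A4 'zfspool' /etc/pve/storage.cfg
# confirm the subvolume only exists once on this node
zfs list rpool/data/subvol-162-disk-1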
Do you have any clue?
Regards