Migration fails after replication stopped working

Hi,

I recently reinstalled with PVE 5.1 and have been using a desktop as a backup node when making changes to the server.
Migrating between the two has worked flawlessly (2-node cluster with ZFS), but for some reason replication now tries to send a full image even though the volume already exists on the other side:
2017-11-17 07:06:59 204-1: start replication job
2017-11-17 07:06:59 204-1: guest => CT 204, running => 0
2017-11-17 07:06:59 204-1: volumes => local-zfs:subvol-204-disk-1
2017-11-17 07:07:00 204-1: create snapshot '__replicate_204-1_1510898819__' on local-zfs:subvol-204-disk-1
2017-11-17 07:07:00 204-1: full sync 'local-zfs:subvol-204-disk-1' (__replicate_204-1_1510898819__)
2017-11-17 07:07:00 204-1: full send of rpool/data/subvol-204-disk-1@__replicate_204-0_1510573211__ estimated size is 537M
2017-11-17 07:07:00 204-1: send from @__replicate_204-0_1510573211__ to rpool/data/subvol-204-disk-1@__replicate_204-1_1510898819__ estimated size is 1.60M
2017-11-17 07:07:00 204-1: total estimated size is 539M
2017-11-17 07:07:00 204-1: TIME SENT SNAPSHOT
2017-11-17 07:07:01 204-1: rpool/data/subvol-204-disk-1 name rpool/data/subvol-204-disk-1 -
2017-11-17 07:07:01 204-1: volume 'rpool/data/subvol-204-disk-1' already exists
2017-11-17 07:07:01 204-1: command 'zfs send -Rpv -- rpool/data/subvol-204-disk-1@__replicate_204-1_1510898819__' failed: got signal 13
2017-11-17 07:07:01 204-1: delete previous replication snapshot '__replicate_204-1_1510898819__' on local-zfs:subvol-204-disk-1
2017-11-17 07:07:01 204-1: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:subvol-204-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_204-1_1510898819__ | /usr/bin/cstream -t 10000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=pve' root@192.168.110.2 -- pvesm import local-zfs:subvol-204-disk-1 zfs - -with-snapshots 1' failed: exit code 255

This unfortunately prevents me from migrating back to the server. I could remove the volume for this container, but I have others in the 3-500 GB range that I would rather not "reset" in this way. Can I do anything to get replication back to normal?
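
For reference, this is roughly what I would run to compare the replication snapshots on the two nodes and see whether a common __replicate_204-*__ snapshot still exists (dataset name and target IP taken from the log above, adjust as needed):

# on the desktop (source): list snapshots of the affected subvol
zfs list -t snapshot -o name,creation -s creation rpool/data/subvol-204-disk-1

# on the server (target): the same listing, run over ssh from the source node
ssh root@192.168.110.2 zfs list -t snapshot -o name,creation -s creation rpool/data/subvol-204-disk-1

If the two lists share no __replicate_204-*__ snapshot, an incremental send is no longer possible, which would explain the attempted full sync.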

Thanks in advance,
Bo

Output from the failed migration (I suspect the failing replication above is the root cause):
2017-11-17 07:06:53 shutdown CT 204
2017-11-17 07:06:59 starting migration of CT 204 to node 'pve' (192.168.110.2)
2017-11-17 07:06:59 found local volume 'local-zfs:subvol-204-disk-1' (in current VM config)
2017-11-17 07:06:59 start replication job
2017-11-17 07:06:59 guest => CT 204, running => 0
2017-11-17 07:06:59 volumes => local-zfs:subvol-204-disk-1
2017-11-17 07:07:00 create snapshot '__replicate_204-1_1510898819__' on local-zfs:subvol-204-disk-1
2017-11-17 07:07:00 full sync 'local-zfs:subvol-204-disk-1' (__replicate_204-1_1510898819__)
2017-11-17 07:07:00 full send of rpool/data/subvol-204-disk-1@__replicate_204-0_1510573211__ estimated size is 537M
2017-11-17 07:07:00 send from @__replicate_204-0_1510573211__ to rpool/data/subvol-204-disk-1@__replicate_204-1_1510898819__ estimated size is 1.60M
2017-11-17 07:07:00 total estimated size is 539M
2017-11-17 07:07:00 TIME SENT SNAPSHOT
2017-11-17 07:07:01 rpool/data/subvol-204-disk-1 name rpool/data/subvol-204-disk-1 -
2017-11-17 07:07:01 volume 'rpool/data/subvol-204-disk-1' already exists
2017-11-17 07:07:01 command 'zfs send -Rpv -- rpool/data/subvol-204-disk-1@__replicate_204-1_1510898819__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2017-11-17 07:07:01 delete previous replication snapshot '__replicate_204-1_1510898819__' on local-zfs:subvol-204-disk-1
2017-11-17 07:07:01 end replication job with error: command 'set -o pipefail && pvesm export local-zfs:subvol-204-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_204-1_1510898819__ | /usr/bin/cstream -t 10000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=pve' root@192.168.110.2 -- pvesm import local-zfs:subvol-204-disk-1 zfs - -with-snapshots 1' failed: exit code 255
2017-11-17 07:07:01 ERROR: command 'set -o pipefail && pvesm export local-zfs:subvol-204-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_204-1_1510898819__ | /usr/bin/cstream -t 10000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=pve' root@192.168.110.2 -- pvesm import local-zfs:subvol-204-disk-1 zfs - -with-snapshots 1' failed: exit code 255
2017-11-17 07:07:01 aborting phase 1 - cleanup resources
2017-11-17 07:07:01 start final cleanup
2017-11-17 07:07:01 ERROR: migration aborted (duration 00:00:08): command 'set -o pipefail && pvesm export local-zfs:subvol-204-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_204-1_1510898819__ | /usr/bin/cstream -t 10000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=pve' root@192.168.110.2 -- pvesm import local-zfs:subvol-204-disk-1 zfs - -with-snapshots 1' failed: exit code 255
TASK ERROR: migration aborted

Desktop pveversion:
proxmox-ve: 5.1-26 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.15-1-pve: 4.10.15-15
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9

Server pveversion (subscribed):
proxmox-ve: 5.1-26 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-26
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
 
I ended up renaming the existing volumes on the destination and running a fresh replication before migrating. After renaming the volumes, rescheduling replication from the GUI, and then migrating (also from the GUI), everything went fine.
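
For anyone hitting the same issue, this is roughly what the workaround amounts to (dataset and job names are from my setup; if I remember correctly, pvesr schedule-now can trigger the job from the CLI instead of rescheduling it in the GUI):

# on the destination node: move the stale copy out of the way so the full sync has a free name
zfs rename rpool/data/subvol-204-disk-1 rpool/data/subvol-204-disk-1-backup

# on the source node: trigger the replication job again (or reschedule it from the GUI)
pvesr schedule-now 204-1

# on the destination node, once the fresh copy is verified: remove the old one
zfs destroy -r rpool/data/subvol-204-disk-1-backup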

However, I may have stumbled upon unexpected behaviour in the way PVE replication cleans up volumes after replication/migration. I have described the two encounters below:

LXC 250 had an extra mount point. After the migration (GUI) it seems the backup volume I had created (by manually renaming) was automatically deleted by PVE. This did not happen to the single-mount-point LXCs.

The zpool history follows below, in case it's any help:
2017-11-18.09:39:49 zfs rename rpool/data/subvol-250-disk-1 rpool/data/subvol-250-disk-1-backup
2017-11-18.09:39:57 zfs rename rpool/data/subvol-250-disk-2 rpool/data/subvol-250-disk-2-backup
2017-11-18.09:40:38 zfs recv -F -- rpool/data/subvol-241-disk-1
2017-11-18.09:41:04 zfs recv -F -- rpool/data/subvol-232-disk-1
2017-11-18.09:41:09 zfs destroy rpool/data/subvol-232-disk-1@__replicate_232-1_1510994073__
2017-11-18.09:41:57 zfs recv -F -- rpool/data/subvol-241-disk-1
2017-11-18.09:42:03 zfs destroy rpool/data/subvol-241-disk-1@__replicate_241-1_1510994373__
2017-11-18.09:43:13 zfs destroy rpool/data/subvol-232-disk-1-backup -r
2017-11-18.09:43:44 zfs destroy rpool/data/subvol-241-disk-1-backup -r
2017-11-18.09:47:00 zfs recv -F -- rpool/data/subvol-250-disk-1
2017-11-18.11:54:27 zfs get -o value -Hp available,used rpool/data
2017-11-18.12:58:37 zfs get -o value -Hp available,used rpool/data
2017-11-18.14:50:45 zfs recv -F -- rpool/data/subvol-250-disk-2
2017-11-18.15:19:39 zfs destroy rpool/data/subvol-250-disk-2-backup@__replicate_250-0_1510859698__
2017-11-18.15:19:41 zfs destroy -r rpool/data/subvol-250-disk-2-backup
2017-11-18.15:19:47 zfs recv -F -- rpool/data/subvol-250-disk-1
2017-11-18.15:19:52 zfs recv -F -- rpool/data/subvol-250-disk-2
2017-11-18.15:19:55 zfs destroy rpool/data/subvol-250-disk-1@__replicate_250-0_1510994760__
2017-11-18.15:20:02 zfs destroy rpool/data/subvol-250-disk-2@__replicate_250-0_1510994760__


The same thing happened when replicating (VM 216): the "-backup" volume was automatically deleted the next time the replication triggered (22:30):
2017-11-18.18:47:15 zfs rename rpool/data/subvol-216-disk-1 rpool/data/subvol-216-disk-1-backup
2017-11-18.18:47:26 zfs rename rpool/data/subvol-216-disk-2 rpool/data/subvol-216-disk-2-backup
2017-11-18.18:48:24 zfs recv -F -- rpool/data/subvol-216-disk-1
2017-11-18.22:30:03 zfs snapshot rpool/data/subvol-204-disk-1@__replicate_204-0_1511040601__
2017-11-18.22:30:04 zfs send -Rpv -I __replicate_204-0_1510992783__ -- rpool/data/subvol-204-disk-1@__replicate_204-0_1511040601__
2017-11-18.22:30:06 zfs destroy rpool/data/subvol-204-disk-1@__replicate_204-0_1510992783__
2017-11-18.22:30:07 zfs snapshot rpool/data/subvol-211-disk-1@__replicate_211-1_1511040605__
2017-11-18.22:30:08 zfs send -Rpv -I __replicate_211-1_1510993453__ -- rpool/data/subvol-211-disk-1@__replicate_211-1_1511040605__
2017-11-18.22:30:10 zfs destroy rpool/data/subvol-211-disk-1@__replicate_211-1_1510993453__
2017-11-18.22:30:11 zfs snapshot rpool/data/subvol-222-disk-1@__replicate_222-1_1511040609__
2017-11-18.22:30:12 zfs send -Rpv -I __replicate_222-1_1510993723__ -- rpool/data/subvol-222-disk-1@__replicate_222-1_1511040609__
2017-11-18.22:30:14 zfs destroy rpool/data/subvol-222-disk-1@__replicate_222-1_1510993723__
2017-11-18.22:30:15 zfs snapshot rpool/data/subvol-230-disk-1@__replicate_230-1_1511040613__
2017-11-18.22:30:18 zfs send -Rpv -I __replicate_230-1_1510994148__ -- rpool/data/subvol-230-disk-1@__replicate_230-1_1511040613__
2017-11-18.22:30:20 zfs destroy rpool/data/subvol-230-disk-1@__replicate_230-1_1510994148__
2017-11-18.22:30:21 zfs snapshot rpool/data/subvol-232-disk-1@__replicate_232-1_1511040619__
2017-11-18.22:30:22 zfs send -Rpv -I __replicate_232-1_1510994461__ -- rpool/data/subvol-232-disk-1@__replicate_232-1_1511040619__
2017-11-18.22:30:24 zfs destroy rpool/data/subvol-232-disk-1@__replicate_232-1_1510994461__
2017-11-18.22:30:25 zfs snapshot rpool/data/subvol-241-disk-1@__replicate_241-1_1511040623__
2017-11-18.22:30:26 zfs send -Rpv -I __replicate_241-1_1510994515__ -- rpool/data/subvol-241-disk-1@__replicate_241-1_1511040623__
2017-11-18.22:30:28 zfs destroy rpool/data/subvol-241-disk-1@__replicate_241-1_1510994515__
2017-11-18.22:30:28 zfs snapshot rpool/data/subvol-250-disk-1@__replicate_250-0_1511040627__
2017-11-18.22:30:29 zfs snapshot rpool/data/subvol-250-disk-2@__replicate_250-0_1511040627__
2017-11-18.22:30:30 zfs send -Rpv -I __replicate_250-0_1511014778__ -- rpool/data/subvol-250-disk-1@__replicate_250-0_1511040627__
2017-11-18.22:30:32 zfs send -Rpv -I __replicate_250-0_1511014778__ -- rpool/data/subvol-250-disk-2@__replicate_250-0_1511040627__
2017-11-18.22:30:32 zfs destroy rpool/data/subvol-250-disk-1@__replicate_250-0_1511014778__
2017-11-18.22:30:34 zfs destroy rpool/data/subvol-250-disk-2@__replicate_250-0_1511014778__
2017-11-18.22:35:14 zfs recv -F -- rpool/data/subvol-216-disk-2
2017-11-18.22:35:15 zfs destroy rpool/data/subvol-216-disk-2-backup@__replicate_216-0_1510859639__
2017-11-18.22:35:17 zfs destroy -r rpool/data/subvol-216-disk-2-backup
2017-11-18.22:35:22 zfs recv -F -- rpool/data/subvol-216-disk-1
2017-11-18.22:35:26 zfs recv -F -- rpool/data/subvol-216-disk-2
2017-11-18.22:35:28 zfs destroy rpool/data/subvol-216-disk-1@__replicate_216-0_1511027280__
2017-11-18.22:35:33 zfs destroy rpool/data/subvol-216-disk-2@__replicate_216-0_1511027280__
2017-11-19.02:30:02 zfs snapshot rpool/data/subvol-204-disk-1@__replicate_204-0_1511055000__
2017-11-19.02:30:02 zfs send -Rpv -I __replicate_204-0_1511040601__ -- rpool/data/subvol-204-disk-1@__replicate_204-0_1511055000__
2017-11-19.02:30:02 zfs recv -F -- rpool/data/subvol-216-disk-1
2017-11-19.02:30:03 zfs destroy rpool/data/subvol-204-disk-1@__replicate_204-0_1511040601__
2017-11-19.02:30:04 zfs recv -F -- rpool/data/subvol-216-disk-2
2017-11-19.02:30:04 zfs destroy rpool/data/subvol-216-disk-1@__replicate_216-0_1511040914__
2017-11-19.02:30:05 zfs destroy rpool/data/subvol-216-disk-2@__replicate_216-0_1511040914__
2017-11-19.02:30:05 zfs snapshot rpool/data/subvol-211-disk-1@__replicate_211-1_1511055003__
2017-11-19.02:30:05 zfs send -Rpv -I __replicate_211-1_1511040605__ -- rpool/data/subvol-211-disk-1@__replicate_211-1_1511055003__
2017-11-19.02:30:07 zfs destroy rpool/data/subvol-211-disk-1@__replicate_211-1_1511040605__
2017-11-19.02:30:08 zfs snapshot rpool/data/subvol-222-disk-1@__replicate_222-1_1511055006__
2017-11-19.02:30:08 zfs send -Rpv -I __replicate_222-1_1511040609__ -- rpool/data/subvol-222-disk-1@__replicate_222-1_1511055006__
2017-11-19.02:30:10 zfs destroy rpool/data/subvol-222-disk-1@__replicate_222-1_1511040609__
2017-11-19.02:30:10 zfs snapshot rpool/data/subvol-230-disk-1@__replicate_230-1_1511055009__
2017-11-19.02:30:11 zfs send -Rpv -I __replicate_230-1_1511040613__ -- rpool/data/subvol-230-disk-1@__replicate_230-1_1511055009__
2017-11-19.02:30:13 zfs destroy rpool/data/subvol-230-disk-1@__replicate_230-1_1511040613__
2017-11-19.02:30:14 zfs snapshot rpool/data/subvol-232-disk-1@__replicate_232-1_1511055012__
2017-11-19.02:30:14 zfs send -Rpv -I __replicate_232-1_1511040619__ -- rpool/data/subvol-232-disk-1@__replicate_232-1_1511055012__
2017-11-19.02:30:16 zfs destroy rpool/data/subvol-232-disk-1@__replicate_232-1_1511040619__
2017-11-19.02:30:16 zfs snapshot rpool/data/subvol-241-disk-1@__replicate_241-1_1511055015__
2017-11-19.02:30:17 zfs send -Rpv -I __replicate_241-1_1511040623__ -- rpool/data/subvol-241-disk-1@__replicate_241-1_1511055015__
2017-11-19.02:30:19 zfs destroy rpool/data/subvol-241-disk-1@__replicate_241-1_1511040623__
2017-11-19.02:30:19 zfs snapshot rpool/data/subvol-250-disk-1@__replicate_250-0_1511055018__
2017-11-19.02:30:19 zfs snapshot rpool/data/subvol-250-disk-2@__replicate_250-0_1511055018__
2017-11-19.02:30:20 zfs send -Rpv -I __replicate_250-0_1511040627__ -- rpool/data/subvol-250-disk-1@__replicate_250-0_1511055018__
2017-11-19.02:30:21 zfs send -Rpv -I __replicate_250-0_1511040627__ -- rpool/data/subvol-250-disk-2@__replicate_250-0_1511055018__
2017-11-19.02:30:21 zfs destroy rpool/data/subvol-250-disk-1@__replicate_250-0_1511040627__
2017-11-19.02:30:27 zfs destroy rpool/data/subvol-250-disk-2@__replicate_250-0_1511040627__

The deletion only happens to the "*-disk-2-backup" volumes; the renamed disk-1 volumes survive both replication and migration.
Is this expected behaviour from replication?

Thanks in advance,
Bo
 
Hi,

Replication is not designed to handle renamed volumes, and "*-disk-2-backup" is not a volume name PVE expects.
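
If you want to keep a manual copy around while replication is active, it is probably safer to move it to a name that does not look like a guest volume at all, so it is less likely to be touched by any cleanup (this is a sketch based on the assumption that cleanup goes by the subvol-<vmid>-disk-<n> naming scheme; adjust names to your pool layout):

# keep the copy outside the subvol-<vmid>-disk-<n> naming scheme
zfs rename rpool/data/subvol-216-disk-2-backup rpool/data/manual-216-disk-2-backup

# and remove it yourself once it is no longer needed
zfs destroy -r rpool/data/manual-216-disk-2-backup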