Hi, unfortunately the error occurs again pretty soon.
So, I set up a new replication job and it worked correctly for a while, but after some time I started testing disaster scenarios, like shutting the destination host down and leaving it down long enough for a couple of replication jobs to run (and fail). After powering the destination host back on I got this error message (no matter whether the VM was running or not):
Running VM:
2018-06-25 20:16:02 70020-0: start replication job
2018-06-25 20:16:02 70020-0: guest => VM 70020, running => 12087
2018-06-25 20:16:02 70020-0: volumes => zfs1:vm-70020-disk-1
2018-06-25 20:16:04 70020-0: create snapshot '__replicate_70020-0_1529950562__' on zfs1:vm-70020-disk-1
2018-06-25 20:16:05 70020-0: full sync 'zfs1:vm-70020-disk-1' (__replicate_70020-0_1529950562__)
2018-06-25 20:16:07 70020-0: full send of pool1zfs/vm-70020-disk-1@__replicate_70020-0_1529950562__ estimated size is 3.10G
2018-06-25 20:16:07 70020-0: total estimated size is 3.10G
2018-06-25 20:16:07 70020-0: pool1zfs/vm-70020-disk-1 name pool1zfs/vm-70020-disk-1 -
2018-06-25 20:16:07 70020-0: volume 'pool1zfs/vm-70020-disk-1' already exists
2018-06-25 20:16:07 70020-0: TIME SENT SNAPSHOT
2018-06-25 20:16:07 70020-0: warning: cannot send 'pool1zfs/vm-70020-disk-1@__replicate_70020-0_1529950562__': signal received
2018-06-25 20:16:07 70020-0: cannot send 'pool1zfs/vm-70020-disk-1': I/O error
2018-06-25 20:16:07 70020-0: command 'zfs send -Rpv -- pool1zfs/vm-70020-disk-1@__replicate_70020-0_1529950562__' failed: exit code 1
2018-06-25 20:16:07 70020-0: delete previous replication snapshot '__replicate_70020-0_1529950562__' on zfs1:vm-70020-disk-1
2018-06-25 20:16:07 70020-0: end replication job with error: command 'set -o pipefail && pvesm export zfs1:vm-70020-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_70020-0_1529950562__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pvesuma02' root@10.20.28.3 -- pvesm import zfs1:vm-70020-disk-1 zfs - -with-snapshots 1' failed: exit code 255
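What stands out to me is the "volume 'pool1zfs/vm-70020-disk-1' already exists" line: it looks like the destination still holds the dataset from the runs before the outage, so the import side aborts and kills the send (hence the "signal received" / "I/O error"). Assuming that leftover dataset is really the culprit, this is just a sketch of how it could be checked on the destination host (names taken from the log above):

# on the destination (pvesuma02): does the dataset still exist,
# and which replication snapshots are left over from before the outage?
zfs list pool1zfs/vm-70020-disk-1
zfs list -t snapshot -r pool1zfs/vm-70020-disk-1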
Stopped VM:
2018-06-25 20:24:02 70020-0: start replication job
2018-06-25 20:24:02 70020-0: guest => VM 70020, running => 0
2018-06-25 20:24:02 70020-0: volumes => zfs1:vm-70020-disk-1
2018-06-25 20:24:04 70020-0: create snapshot '__replicate_70020-0_1529951042__' on zfs1:vm-70020-disk-1
2018-06-25 20:24:04 70020-0: full sync 'zfs1:vm-70020-disk-1' (__replicate_70020-0_1529951042__)
2018-06-25 20:24:05 70020-0: full send of pool1zfs/vm-70020-disk-1@__replicate_70020-0_1529951042__ estimated size is 3.13G
2018-06-25 20:24:05 70020-0: total estimated size is 3.13G
2018-06-25 20:24:06 70020-0: TIME SENT SNAPSHOT
2018-06-25 20:24:06 70020-0: pool1zfs/vm-70020-disk-1 name pool1zfs/vm-70020-disk-1 -
2018-06-25 20:24:06 70020-0: volume 'pool1zfs/vm-70020-disk-1' already exists
2018-06-25 20:24:06 70020-0: warning: cannot send 'pool1zfs/vm-70020-disk-1@__replicate_70020-0_1529951042__': signal received
2018-06-25 20:24:06 70020-0: cannot send 'pool1zfs/vm-70020-disk-1': I/O error
2018-06-25 20:24:06 70020-0: command 'zfs send -Rpv -- pool1zfs/vm-70020-disk-1@__replicate_70020-0_1529951042__' failed: exit code 1
2018-06-25 20:24:06 70020-0: delete previous replication snapshot '__replicate_70020-0_1529951042__' on zfs1:vm-70020-disk-1
2018-06-25 20:24:06 70020-0: end replication job with error: command 'set -o pipefail && pvesm export zfs1:vm-70020-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_70020-0_1529951042__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pvesuma02' root@10.20.28.3 -- pvesm import zfs1:vm-70020-disk-1 zfs - -with-snapshots 1' failed: exit code 255
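Since both attempts fail the same way, I assume the workaround would be to destroy the stale dataset on the destination so the next run can start with a clean full sync, but I'd rather confirm before destroying anything:

# on the destination ONLY, assuming the source copy is intact -- this deletes the replica!
zfs destroy -r pool1zfs/vm-70020-disk-1
# then on the source node, trigger the job again instead of waiting for the schedule
pvesr schedule-now 70020-0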
Hopefully you can find something crucial in there ...
Thank you in advance and
BR
Tonci