Replication failed and "state" snapshot found

yena

Renowned Member
Nov 18, 2011
Hello,
in my cluster I have found a VM whose replication failed with this log:

--------------------------------------------------------------------------------
2020-11-02 12:54:01 105-0: start replication job
2020-11-02 12:54:01 105-0: guest => VM 105, running => 15447
2020-11-02 12:54:01 105-0: volumes => KVM:vm-105-disk-0,KVM:vm-105-state-kreando0910
2020-11-02 12:54:01 105-0: create snapshot '__replicate_105-0_1604318041__' on KVM:vm-105-disk-0
2020-11-02 12:54:03 105-0: create snapshot '__replicate_105-0_1604318041__' on KVM:vm-105-state-kreando0910
2020-11-02 12:54:03 105-0: using secure transmission, rate limit: none
2020-11-02 12:54:03 105-0: full sync 'KVM:vm-105-disk-0' (__replicate_105-0_1604318041__)
2020-11-02 12:54:04 105-0: full send of STORAGE/KVM/vm-105-disk-0@kreando0910 estimated size is 494G
2020-11-02 12:54:04 105-0: send from @kreando0910 to STORAGE/KVM/vm-105-disk-0@autodaily201019052801 estimated size is 56.7G
2020-11-02 12:54:04 105-0: send from @autodaily201019052801 to STORAGE/KVM/vm-105-disk-0@autodaily201020052835 estimated size is 7.69G
2020-11-02 12:54:04 105-0: send from @autodaily201020052835 to STORAGE/KVM/vm-105-disk-0@autodaily201021052900 estimated size is 7.43G
......
2020-11-02 12:54:04 105-0: send from @autodaily201101052917 to STORAGE/KVM/vm-105-disk-0@autodaily201102052831 estimated size is 5.07G
2020-11-02 12:54:04 105-0: send from @autodaily201102052831 to STORAGE/KVM/vm-105-disk-0@__replicate_105-0_1604318041__ estimated size is 1.29G
2020-11-02 12:54:04 105-0: total estimated size is 689G
2020-11-02 12:54:04 105-0: STORAGE/KVM/vm-105-disk-0 name STORAGE/KVM/vm-105-disk-0 -
2020-11-02 12:54:04 105-0: volume 'STORAGE/KVM/vm-105-disk-0' already exists
2020-11-02 12:54:04 105-0: TIME SENT SNAPSHOT STORAGE/KVM/vm-105-disk-0@kreando0910
2020-11-02 12:54:04 105-0: warning: cannot send 'STORAGE/KVM/vm-105-disk-0@kreando0910': Broken pipe
2020-11-02 12:54:04 105-0: TIME SENT SNAPSHOT STORAGE/KVM/vm-105-disk-0@autodaily201019052801
2020-11-02 12:54:04 105-0: warning: cannot send 'STORAGE/KVM/vm-105-disk-0@autodaily201019052801': Broken pipe
2020-11-02 12:54:04 105-0: TIME SENT SNAPSHOT STORAGE/KVM/vm-105-disk-0@autodaily201020052835
......
2020-11-02 12:54:04 105-0: TIME SENT SNAPSHOT STORAGE/KVM/vm-105-disk-0@autodaily201102052831
2020-11-02 12:54:04 105-0: warning: cannot send 'STORAGE/KVM/vm-105-disk-0@autodaily201102052831': Broken pipe
2020-11-02 12:54:04 105-0: TIME SENT SNAPSHOT STORAGE/KVM/vm-105-disk-0@__replicate_105-0_1604318041__
2020-11-02 12:54:04 105-0: warning: cannot send 'STORAGE/KVM/vm-105-disk-0@__replicate_105-0_1604318041__': Broken pipe
2020-11-02 12:54:04 105-0: cannot send 'STORAGE/KVM/vm-105-disk-0': I/O error
2020-11-02 12:54:04 105-0: command 'zfs send -Rpv -- STORAGE/KVM/vm-105-disk-0@__replicate_105-0_1604318041__' failed: exit code 1
2020-11-02 12:54:04 105-0: delete previous replication snapshot '__replicate_105-0_1604318041__' on KVM:vm-105-disk-0
2020-11-02 12:54:04 105-0: delete previous replication snapshot '__replicate_105-0_1604318041__' on KVM:vm-105-state-kreando0910
2020-11-02 12:54:04 105-0: end replication job with error: command 'set -o pipefail && pvesm export KVM:vm-105-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_105-0_1604318041__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=stg1msw' root@89.40.227.32 -- pvesm import KVM:vm-105-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 255
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Listing the dataset snapshots on my storage, I can see:
STORAGE/KVM/vm-105-state-kreando0910 15.3G 6.49T 15.3G -
STORAGE/KVM/vm-105-state-kreando0910@__replicate_105-0_1603805456__ 0B - 15.3G -
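(A listing like the above can be reproduced with something along these lines; the dataset names are taken from the log, the exact flags are an assumption:)

--------------------------------------------------------------------------------
# List the zvols and all of their snapshots under the KVM dataset,
# then filter for the unexpected state volume.
zfs list -t all -r STORAGE/KVM | grep vm-105-state
--------------------------------------------------------------------------------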

What is vm-105-state? Normally I only have vm-105-disk volumes.

Thanks!
 
Hi,

this looks like a write problem on the destination side.

Please check the available space on the destination side and also the pool health.
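For example (a minimal sketch; the pool name STORAGE and the destination address are taken from your log):

--------------------------------------------------------------------------------
# Run on the destination node, here reached as in the failed replication command:
ssh root@89.40.227.32 'zpool status STORAGE'       # pool health, device errors, scrub/resilver state
ssh root@89.40.227.32 'zfs list -o space STORAGE'  # available vs. used space, including snapshot usage
--------------------------------------------------------------------------------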
 
Hi,
the vm-105-state-kreando0910 volume comes from creating a snapshot with RAM included: it contains the memory state of the VM. Did the replication start failing after you did a rollback? For me that causes a similar problem, and I created a bug report. As a workaround I suggest re-creating the replication job, as sketched below.
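A sketch of that workaround with pvesr (the job id 105-0 is from your log; the target node name and the schedule are assumptions, adjust them to your setup):

--------------------------------------------------------------------------------
pvesr list          # confirm the job id and its target node
pvesr delete 105-0  # remove the failing replication job
# Re-create the job; replace TARGETNODE and the schedule with your values.
pvesr create-local-job 105-0 TARGETNODE --schedule '*/15'
--------------------------------------------------------------------------------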
 
