Replication error

karthikraja.a

Dec 26, 2023
end replication job with error: command 'set -o pipefail && pvesm export local-zfs:base-9001-disk-0/vm-190-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_190-0_1702570986__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=dsco-vm-02' root@192.168.3.62 -- pvesm import local-zfs:base-9001-disk-0/vm-190-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_190-0_1702570986__ -allow-rename 0' failed: exit code 1


I am not able to replicate a VM from one server to the other and I get the above error. What can I do to rectify this error and replicate my VM?
 
start replication job
2023-12-26 17:19:04 192-0: guest => VM 192, running => 5688
2023-12-26 17:19:04 192-0: volumes => local-zfs:base-9002-disk-0/vm-192-disk-1,local-zfs:base-9002-disk-1/vm-192-disk-0,local-zfs:vm-192-disk-2,local-zfs:vm-192-state-S2023_12_18_10_54,local-zfs:vm-192-state-S_2023_12_23_11_58
2023-12-26 17:19:06 192-0: create snapshot '__replicate_192-0_1703571544__' on local-zfs:base-9002-disk-0/vm-192-disk-1
2023-12-26 17:19:06 192-0: create snapshot '__replicate_192-0_1703571544__' on local-zfs:base-9002-disk-1/vm-192-disk-0
2023-12-26 17:19:07 192-0: create snapshot '__replicate_192-0_1703571544__' on local-zfs:vm-192-disk-2
2023-12-26 17:19:07 192-0: create snapshot '__replicate_192-0_1703571544__' on local-zfs:vm-192-state-S2023_12_18_10_54
2023-12-26 17:19:07 192-0: create snapshot '__replicate_192-0_1703571544__' on local-zfs:vm-192-state-S_2023_12_23_11_58
2023-12-26 17:19:07 192-0: using secure transmission, rate limit: none
2023-12-26 17:19:07 192-0: full sync 'local-zfs:base-9002-disk-0/vm-192-disk-1' (__replicate_192-0_1703571544__)
2023-12-26 17:19:08 192-0: full send of rpool/data/vm-192-disk-1@S2023_12_18_10_54 estimated size is 7.65G
2023-12-26 17:19:08 192-0: send from @S2023_12_18_10_54 to rpool/data/vm-192-disk-1@S_2023_12_23_11_58 estimated size is 894M
2023-12-26 17:19:08 192-0: send from @S_2023_12_23_11_58 to rpool/data/vm-192-disk-1@__replicate_192-0_1703571544__ estimated size is 483M
2023-12-26 17:19:08 192-0: total estimated size is 9.00G
2023-12-26 17:19:08 192-0: TIME SENT SNAPSHOT rpool/data/vm-192-disk-1@S2023_12_18_10_54
2023-12-26 17:19:09 192-0: cannot receive: local origin for clone rpool/data/vm-192-disk-1@S2023_12_18_10_54 does not exist
2023-12-26 17:19:09 192-0: cannot open 'rpool/data/vm-192-disk-1': dataset does not exist
2023-12-26 17:19:09 192-0: command 'zfs recv -F -- rpool/data/vm-192-disk-1' failed: exit code 1
2023-12-26 17:19:09 192-0: warning: cannot send 'rpool/data/vm-192-disk-1@S2023_12_18_10_54': signal received
2023-12-26 17:19:09 192-0: TIME SENT SNAPSHOT rpool/data/vm-192-disk-1@S_2023_12_23_11_58
2023-12-26 17:19:09 192-0: warning: cannot send 'rpool/data/vm-192-disk-1@S_2023_12_23_11_58': Broken pipe
2023-12-26 17:19:09 192-0: TIME SENT SNAPSHOT rpool/data/vm-192-disk-1@__replicate_192-0_1703571544__
2023-12-26 17:19:09 192-0: warning: cannot send 'rpool/data/vm-192-disk-1@__replicate_192-0_1703571544__': Broken pipe
2023-12-26 17:19:09 192-0: cannot send 'rpool/data/vm-192-disk-1': I/O error
2023-12-26 17:19:09 192-0: command 'zfs send -Rpv -- rpool/data/vm-192-disk-1@__replicate_192-0_1703571544__' failed: exit code 1
2023-12-26 17:19:09 192-0: delete previous replication snapshot '__replicate_192-0_1703571544__' on local-zfs:base-9002-disk-0/vm-192-disk-1
2023-12-26 17:19:09 192-0: delete previous replication snapshot '__replicate_192-0_1703571544__' on local-zfs:base-9002-disk-1/vm-192-disk-0
2023-12-26 17:19:09 192-0: delete previous replication snapshot '__replicate_192-0_1703571544__' on local-zfs:vm-192-disk-2
2023-12-26 17:19:09 192-0: delete previous replication snapshot '__replicate_192-0_1703571544__' on local-zfs:vm-192-state-S2023_12_18_10_54
2023-12-26 17:19:09 192-0: delete previous replication snapshot '__replicate_192-0_1703571544__' on local-zfs:vm-192-state-S_2023_12_23_11_58
2023-12-26 17:19:09 192-0: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:base-9002-disk-0/vm-192-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_192-0_1703571544__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=dsco-vm-01' root@192.168.3.61 -- pvesm import local-zfs:base-9002-disk-0/vm-192-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_192-0_1703571544__ -allow-rename 0' failed: exit code 1




This is the full log of the replication error. What could be the solution for it?
 
Hi,
Not sure if you have solved your issue already, but I was facing a similar problem, and the root cause in my case was that I was trying to replicate a VM cloned from a template. I can see the following in your log:
Code:
2023-12-26 17:19:07 192-0: full sync 'local-zfs:base-9002-disk-0/vm-192-disk-1' (__replicate_192-0_1703571544__)
Take a look at the base-disk part:
Code:
base-9002-disk-0
You need to replicate the template's base disk first, and then you should be able to replicate the target disk:
Code:
vm-192-disk-1
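If it helps, here is a rough sketch of how you could confirm this and seed the base disk by hand. This assumes a standard Proxmox linked-clone layout where the base volume carries a `__base__` snapshot; the dataset names and target address are taken from your log, so adjust them to your setup before running anything:

```shell
# On the source node: check whether the disk is a clone of the template's base disk.
# An answer other than "-" means it depends on a base dataset that must
# exist on the target before 'zfs recv' can accept the stream.
zfs get -H -o value origin rpool/data/vm-192-disk-1

# If it reports something like rpool/data/base-9002-disk-0@__base__, send that
# base dataset (with its snapshots) to the target node once, by hand:
zfs send -Rpv rpool/data/base-9002-disk-0@__base__ \
  | ssh root@192.168.3.61 zfs recv rpool/data/base-9002-disk-0
```

Alternatively, setting up a replication job for the template VM (9002) itself to the same target node should have the same effect, since that makes the base disk exist there before the clone's replication runs.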
 
