Replication doing full sync instead of incremental sync

Petr.114

Hello, we are facing problems with replication.
About once a month, our container replication does a "full sync" instead of an "incremental sync".
The container holds a few hundred GB, so this is really annoying.
Why does it do a full sync when the initial full sync has already happened?

Thanks for any advice.
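(For reference, the state of the replication job between runs can be checked from the CLI; a minimal sketch, assuming the job ID 705-0 that shows up in the log further down:)

Code:
pvesr list      # replication jobs configured on this node
pvesr status    # per-job state: last/next sync, duration, fail count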
Code:
root@prox1-brno:~# pveversion -v
proxmox-ve: 6.2-2 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-4.15: 5.4-14
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-26-pve: 4.15.18-54
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.2-1
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-2
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1

CT 705 config:

Code:
arch: amd64
cores: 1
hostname: samba-brno
memory: 1024
mp0: local-zfs:subvol-705-disk-1,mp=/mnt/data,backup=1,size=600G
nameserver: 192.168.7.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.7.1,hwaddr=52:36:F9:0F:AD:28,ip=192.168.7.5/32,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-705-disk-0,size=10G
searchdomain: lan.cutter.cz
startup: order=1,up=90
swap: 1024
unprivileged: 1

Replication job log (705-0):

Code:
2020-09-23 18:00:02 705-0: start replication job
2020-09-23 18:00:02 705-0: guest => CT 705, running => 1
2020-09-23 18:00:02 705-0: volumes => local-zfs:subvol-705-disk-0,local-zfs:subvol-705-disk-1
2020-09-23 18:00:10 705-0: freeze guest filesystem
2020-09-23 18:00:10 705-0: create snapshot '__replicate_705-0_1600876800__' on local-zfs:subvol-705-disk-0
2020-09-23 18:00:10 705-0: create snapshot '__replicate_705-0_1600876800__' on local-zfs:subvol-705-disk-1
2020-09-23 18:00:11 705-0: thaw guest filesystem
2020-09-23 18:00:11 705-0: using secure transmission, rate limit: none
2020-09-23 18:00:11 705-0: full sync 'local-zfs:subvol-705-disk-0' (__replicate_705-0_1600876800__)
2020-09-23 18:00:12 705-0: full send of rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__ estimated size is 1.03G
2020-09-23 18:00:12 705-0: send from @__replicate_705-0_1600869659__ to rpool/data/subvol-705-disk-0@__replicate_705-0_1600876800__ estimated size is 328K
2020-09-23 18:00:12 705-0: total estimated size is 1.03G
2020-09-23 18:00:13 705-0: TIME SENT SNAPSHOT rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:00:14 705-0: 18:00:14 3.03M rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:00:15 705-0: 18:00:15 3.03M rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:00:16 705-0: 18:00:16 3.03M rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:00:17 705-0: 18:00:17 3.03M rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:00:18 705-0: 18:00:18 5.90M rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
-
2020-09-23 18:08:52 705-0: 18:08:52 1.05G rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:08:53 705-0: 18:08:53 1.05G rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:08:54 705-0: 18:08:54 1.05G rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:08:55 705-0: 18:08:55 1.05G rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:08:56 705-0: 18:08:56 1.05G rpool/data/subvol-705-disk-0@__replicate_705-0_1600869659__
2020-09-23 18:08:57 705-0: TIME SENT SNAPSHOT rpool/data/subvol-705-disk-0@__replicate_705-0_1600876800__
2020-09-23 18:09:00 705-0: successfully imported 'local-zfs:subvol-705-disk-0'
2020-09-23 18:09:00 705-0: full sync 'local-zfs:subvol-705-disk-1' (__replicate_705-0_1600876800__)
2020-09-23 18:09:01 705-0: full send of rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__ estimated size is 319G
2020-09-23 18:09:01 705-0: send from @__replicate_705-0_1600869659__ to rpool/data/subvol-705-disk-1@__replicate_705-0_1600876800__ estimated size is 624B
2020-09-23 18:09:01 705-0: total estimated size is 319G
2020-09-23 18:09:01 705-0: TIME SENT SNAPSHOT rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-23 18:09:02 705-0: 18:09:02 3.03M rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-23 18:09:03 705-0: 18:09:03 5.17M rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-23 18:09:04 705-0: 18:09:04 7.04M rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-23 18:09:05 705-0: 18:09:05 9.05M rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-23 18:09:06 705-0: 18:09:06 11.1M rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
-
2020-09-25 10:42:27 705-0: 10:42:27 250G rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-25 10:42:28 705-0: 10:42:28 250G rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-25 10:42:29 705-0: 10:42:29 250G rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-25 10:42:30 705-0: 10:42:30 250G rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
2020-09-25 10:42:31 705-0: 10:42:31 250G rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__
 
Hi,

this happens if the two copies have diverged and no longer share the same snapshots.
You can compare the snapshots with this command:

Code:
zfs send rpool/data/subvol-705-disk-1@__replicate_705-0_1600869659__ | zstreamdump

Also, compare the snapshots on both sides with zfs list.
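For example, a minimal sketch of that comparison, assuming the dataset name from the log above, the same pool layout on both nodes, and a hypothetical second node called "prox2-brno":

Code:
# on the source node
zfs list -t snapshot -o name,creation,guid rpool/data/subvol-705-disk-1

# on the replication target (replace prox2-brno with your actual second node)
ssh prox2-brno zfs list -t snapshot -o name,creation,guid rpool/data/subvol-705-disk-1

If the last __replicate_705-0_* snapshot is missing on one side, or its guid differs, the next run has no common base and falls back to a full sync.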
 
Hello, and thank you for the response.

I compared the snapshots on the main host and the replicated host and both are the same, but a new replication has already run, so it is probably pointless to compare them right now while it works... I will try it when it "fails" again.

What do you mean by "compare on both sides with list"?
 
