Replication taking very long

Replication used to take only a few seconds, but for the last two days it has been taking several hours.
This happens on all my VMs.
Any ideas?
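In case it is useful, the replication jobs and the duration of their last run can be listed on the source node. A minimal check, assuming the standard pvesr tool that ships with Proxmox VE 5.x:
Code:
# list replication jobs with last sync time, duration and fail count
pvesr status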

Example of a replication log:
Code:
2018-06-19 10:08:13 104-0: start replication job
2018-06-19 10:08:16 104-0: guest => VM 104, running => 25050
2018-06-19 10:08:16 104-0: volumes => stockage_zfs:vm-104-disk-1,stockage_zfs:vm-104-disk-2
2018-06-19 10:08:17 104-0: create snapshot '__replicate_104-0_1529395693__' on stockage_zfs:vm-104-disk-1
2018-06-19 10:08:17 104-0: create snapshot '__replicate_104-0_1529395693__' on stockage_zfs:vm-104-disk-2
2018-06-19 10:08:17 104-0: incremental sync 'stockage_zfs:vm-104-disk-1' (__replicate_104-0_1529292179__ => __replicate_104-0_1529395693__)
2018-06-19 10:08:18 104-0: stockage_zfs/vm-104-disk-1@__replicate_104-0_1529292179__    name    stockage_zfs/vm-104-disk-1@__replicate_104-0_1529292179__    -
2018-06-19 10:08:18 104-0: send from @__replicate_104-0_1529292179__ to stockage_zfs/vm-104-disk-1@__replicate_104-0_1529395693__ estimated size is 624B
2018-06-19 10:08:18 104-0: total estimated size is 624B
2018-06-19 10:08:18 104-0: TIME        SENT   SNAPSHOT
2018-06-19 10:08:18 104-0: incremental sync 'stockage_zfs:vm-104-disk-2' (__replicate_104-0_1529292179__ => __replicate_104-0_1529395693__)
2018-06-19 10:08:19 104-0: stockage_zfs/vm-104-disk-2@__replicate_104-0_1529292179__    name    stockage_zfs/vm-104-disk-2@__replicate_104-0_1529292179__    -
2018-06-19 10:08:19 104-0: send from @__replicate_104-0_1529292179__ to stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__ estimated size is 5.91G
2018-06-19 10:08:19 104-0: total estimated size is 5.91G
2018-06-19 10:08:19 104-0: TIME        SENT   SNAPSHOT
2018-06-19 10:08:20 104-0: 10:08:20 2.04M stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 10:08:21 104-0: 10:08:21 2.15M stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 10:08:22 104-0: 10:08:22 2.15M stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 10:08:23 104-0: 10:08:23 2.61M stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 10:08:24 104-0: 10:08:24 2.85M stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 10:08:25 104-0: 10:08:25 3.10M stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
.....................................................
Many lines
.....................................................
2018-06-19 19:21:14 104-0: 19:21:14   5.96G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:15 104-0: 19:21:15   5.96G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:16 104-0: 19:21:16   5.96G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:17 104-0: 19:21:17   5.96G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:18 104-0: 19:21:18   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:19 104-0: 19:21:19   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:20 104-0: 19:21:20   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:21 104-0: 19:21:21   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:22 104-0: 19:21:22   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:23 104-0: 19:21:23   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:24 104-0: 19:21:24   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:25 104-0: 19:21:25   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:26 104-0: 19:21:26   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:27 104-0: 19:21:27   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:28 104-0: 19:21:28   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:29 104-0: 19:21:29   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:30 104-0: 19:21:30   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:31 104-0: 19:21:31   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:32 104-0: 19:21:32   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:33 104-0: 19:21:33   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:35 104-0: 19:21:34   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:36 104-0: 19:21:36   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:37 104-0: 19:21:37   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:38 104-0: 19:21:38   5.97G   stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__
2018-06-19 19:21:49 104-0: delete previous replication snapshot '__replicate_104-0_1529292179__' on stockage_zfs:vm-104-disk-1
2018-06-19 19:21:49 104-0: delete previous replication snapshot '__replicate_104-0_1529292179__' on stockage_zfs:vm-104-disk-2
2018-06-19 19:21:52 104-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_104-0_1529292179__' on stockage_zfs:vm-104-disk-1
2018-06-19 19:21:52 104-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_104-0_1529292179__' on stockage_zfs:vm-104-disk-2
2018-06-19 19:21:52 104-0: end replication job
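Some rough arithmetic on the log above: about 5.97 GB were sent between 10:08 and 19:21, roughly 9 hours 13 minutes, which works out to an average of under 200 KB/s.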
 
Hi,

I would guess your network is overloaded.
What network do you use and is it under your control?
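If you want to rule that out, a raw throughput test between the two nodes is a quick way to check. A minimal sketch, assuming iperf3 is installed on both nodes (the target hostname is a placeholder):
Code:
# on the replication target
iperf3 -s
# on the source node
iperf3 -c <target-node> -t 30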
 
The network isn't overloaded.
In the example above, the replication transfers 6 GB of data, but normally it should only transfer the new data, shouldn't it? (about 200 MB for this VM ...)
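To check what an incremental send should actually transfer, ZFS can report it directly. A sketch, assuming the previous replication snapshot still exists on the source (names taken from the log above):
Code:
# space written to the volume since the last replicated snapshot
zfs get written@__replicate_104-0_1529292179__ stockage_zfs/vm-104-disk-2
# dry-run size estimate of the incremental stream (-n = no send, -v = verbose)
zfs send -nv -i @__replicate_104-0_1529292179__ stockage_zfs/vm-104-disk-2@__replicate_104-0_1529395693__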

EDIT: there are weird lines in daemon.log:
Code:
Jun 20 09:59:49 proxmox corosync[16543]: notice  [TOTEM ] Retransmit List: 13f4
Jun 20 09:59:49 proxmox corosync[16543]:  [TOTEM ] Retransmit List: 13f4
Jun 20 09:59:49 proxmox corosync[16543]: notice  [TOTEM ] Retransmit List: 13f8
Jun 20 09:59:49 proxmox corosync[16543]:  [TOTEM ] Retransmit List: 13f8
Jun 20 09:59:51 proxmox corosync[16543]: notice  [TOTEM ] Retransmit List: 13fc
Jun 20 09:59:51 proxmox corosync[16543]:  [TOTEM ] Retransmit List: 13fc
Jun 20 09:59:57 proxmox corosync[16543]: notice  [TOTEM ] Retransmit List: 1411
Jun 20 09:59:57 proxmox corosync[16543]:  [TOTEM ] Retransmit List: 1411
Jun 20 09:59:59 proxmox corosync[16543]: notice  [TOTEM ] Retransmit List: 1417
Jun 20 09:59:59 proxmox corosync[16543]:  [TOTEM ] Retransmit List: 1417
Jun 20 10:00:07 proxmox corosync[16543]: notice  [TOTEM ] Retransmit List: 1436
Jun 20 10:00:07 proxmox corosync[16543]:  [TOTEM ] Retransmit List: 1436
Jun 20 10:00:07 proxmox corosync[16543]:  [TOTEM ] Retransmit List: 1438
Jun 20 10:00:07 proxmox corosync[16543]: notice  [TOTEM ] Retransmit List: 1438
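Those retransmit messages mean corosync is losing packets on the cluster network, which can happen when replication or other heavy traffic shares the same link. Two quick checks, assuming a stock corosync 2.x cluster (node names are placeholders):
Code:
# ring status as seen by corosync on this node
corosync-cfgtool -s
# latency/loss test between cluster nodes (run it on all nodes at the same time)
omping -c 600 -i 1 -q <node1> <node2> <node3>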
 