[SOLVED] Replication error. Broken pipe on 2nd send

digidax

Replication of the container works from pve4 to pve1 and pve2, but not to pve3.
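
For reference, the configured replication jobs and their target nodes can be listed on the source node; this is a plain pvesr call, nothing specific to my setup assumed:

Code:
# on pve4: list all configured replication jobs and their targets
pvesr list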

The log:
Code:
2021-04-15 09:50:00 211-2: start replication job
2021-04-15 09:50:00 211-2: guest => CT 211, running => 0
2021-04-15 09:50:00 211-2: volumes => pve_zfs:subvol-211-disk-0,pve_zfs:subvol-211-disk-1
2021-04-15 09:50:01 211-2: (remote_prepare_local_job) storage does not support content type 'none'
2021-04-15 09:50:01 211-2: create snapshot '__replicate_211-2_1618473000__' on pve_zfs:subvol-211-disk-0
2021-04-15 09:50:01 211-2: create snapshot '__replicate_211-2_1618473000__' on pve_zfs:subvol-211-disk-1
2021-04-15 09:50:01 211-2: using insecure transmission, rate limit: none
2021-04-15 09:50:01 211-2: full sync 'pve_zfs:subvol-211-disk-0' (__replicate_211-2_1618473000__)
2021-04-15 09:50:02 211-2: storage does not support content type 'none'
2021-04-15 09:50:02 211-2: full send of rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__ estimated size is 4.63G
2021-04-15 09:50:02 211-2: send from @__replicate_211-1_1618470001__ to rpool/data/subvol-211-disk-0@__replicate_211-0_1618470013__ estimated size is 624B
2021-04-15 09:50:02 211-2: send from @__replicate_211-0_1618470013__ to rpool/data/subvol-211-disk-0@__replicate_211-2_1618473000__ estimated size is 624B
2021-04-15 09:50:02 211-2: total estimated size is 4.63G
2021-04-15 09:50:03 211-2: TIME        SENT   SNAPSHOT rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:04 211-2: 09:50:04   29.3M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:05 211-2: 09:50:05   95.5M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:06 211-2: 09:50:06    136M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:07 211-2: 09:50:07    176M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:08 211-2: 09:50:08    187M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:09 211-2: 09:50:09    209M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:10 211-2: 09:50:10    263M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:11 211-2: 09:50:11    314M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:12 211-2: 09:50:12    331M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:13 211-2: 09:50:13    387M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:14 211-2: 09:50:14    500M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:15 211-2: 09:50:15    584M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:16 211-2: 09:50:16    653M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:17 211-2: 09:50:17    700M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:18 211-2: 09:50:18    742M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:19 211-2: 09:50:19    806M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:20 211-2: 09:50:20    893M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:21 211-2: 09:50:21   1005M   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:22 211-2: 09:50:22   1.07G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:23 211-2: 09:50:23   1.18G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:24 211-2: 09:50:24   1.29G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:25 211-2: 09:50:25   1.40G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:26 211-2: 09:50:26   1.51G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:27 211-2: 09:50:27   1.62G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:28 211-2: 09:50:28   1.72G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:29 211-2: 09:50:29   1.84G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:30 211-2: 09:50:30   1.94G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:31 211-2: 09:50:31   2.05G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:32 211-2: 09:50:32   2.16G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:33 211-2: 09:50:33   2.27G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:34 211-2: 09:50:34   2.38G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:35 211-2: 09:50:35   2.49G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:36 211-2: 09:50:36   2.60G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:37 211-2: 09:50:37   2.71G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:38 211-2: 09:50:38   2.82G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:39 211-2: 09:50:39   2.93G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:40 211-2: 09:50:40   3.04G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:41 211-2: 09:50:41   3.15G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:42 211-2: 09:50:42   3.26G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:43 211-2: 09:50:43   3.37G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:44 211-2: 09:50:44   3.47G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:45 211-2: 09:50:45   3.58G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:46 211-2: 09:50:46   3.69G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:47 211-2: 09:50:47   3.80G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:48 211-2: 09:50:48   3.91G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:49 211-2: 09:50:49   4.02G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:50 211-2: 09:50:50   4.13G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:51 211-2: 09:50:51   4.24G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:52 211-2: 09:50:52   4.35G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:53 211-2: 09:50:53   4.46G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:54 211-2: 09:50:54   4.56G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:55 211-2: 09:50:55   4.67G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:56 211-2: 09:50:56   4.70G   rpool/data/subvol-211-disk-0@__replicate_211-1_1618470001__
2021-04-15 09:50:56 211-2: TIME        SENT   SNAPSHOT rpool/data/subvol-211-disk-0@__replicate_211-0_1618470013__
2021-04-15 09:50:56 211-2: TIME        SENT   SNAPSHOT rpool/data/subvol-211-disk-0@__replicate_211-2_1618473000__
2021-04-15 09:51:00 211-2: [pve3] successfully imported 'pve_zfs:subvol-211-disk-0'
2021-04-15 09:51:00 211-2: full sync 'pve_zfs:subvol-211-disk-1' (__replicate_211-2_1618473000__)
2021-04-15 09:51:01 211-2: full send of rpool/data/subvol-211-disk-1@__replicate_211-1_1618470001__ estimated size is 1.31G
2021-04-15 09:51:01 211-2: send from @__replicate_211-1_1618470001__ to rpool/data/subvol-211-disk-1@__replicate_211-0_1618470013__ estimated size is 624B
2021-04-15 09:51:01 211-2: send from @__replicate_211-0_1618470013__ to rpool/data/subvol-211-disk-1@__replicate_211-2_1618473000__ estimated size is 624B
2021-04-15 09:51:01 211-2: total estimated size is 1.31G
2021-04-15 09:51:02 211-2: warning: cannot send 'rpool/data/subvol-211-disk-1@__replicate_211-1_1618470001__': Broken pipe
2021-04-15 09:51:02 211-2: warning: cannot send 'rpool/data/subvol-211-disk-1@__replicate_211-0_1618470013__': Broken pipe
2021-04-15 09:51:02 211-2: warning: cannot send 'rpool/data/subvol-211-disk-1@__replicate_211-2_1618473000__': Broken pipe
2021-04-15 09:51:02 211-2: cannot send 'rpool/data/subvol-211-disk-1': I/O error
2021-04-15 09:51:02 211-2: command 'zfs send -Rpv -- rpool/data/subvol-211-disk-1@__replicate_211-2_1618473000__' failed: exit code 1
2021-04-15 09:51:02 211-2: [pve3] storage does not support content type 'none'
2021-04-15 09:51:02 211-2: [pve3] volume 'rpool/data/subvol-211-disk-1' already exists
2021-04-15 09:51:02 211-2: delete previous replication snapshot '__replicate_211-2_1618473000__' on pve_zfs:subvol-211-disk-0
2021-04-15 09:51:02 211-2: delete previous replication snapshot '__replicate_211-2_1618473000__' on pve_zfs:subvol-211-disk-1
2021-04-15 09:51:02 211-2: end replication job with error: command 'set -o pipefail && pvesm export pve_zfs:subvol-211-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_211-2_1618473000__' failed: exit code 1

SSH passwordless login works fine from pve4 to pve3.
Reading the log, the first sync (subvol-211-disk-0) went through, but the second one (subvol-211-disk-1) did not.
On pve3, /rpool/data/subvol-211-disk-0 was created.
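
A failing job can also be triggered on demand instead of waiting for the schedule, which makes testing easier; assuming the job ID 211-2 from the log above:

Code:
# show all replication jobs on this node and their last result
pvesr status

# run job 211-2 immediately instead of waiting for its schedule
pvesr schedule-now 211-2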

Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.13-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
no, the second disk fails because the target node already has such a volume ("2021-04-15 09:51:02 211-2: [pve3] volume 'rpool/data/subvol-211-disk-1' already exists"), but they don't have any snapshots in common (as indicated by the "2021-04-15 09:51:00 211-2: full sync 'pve_zfs:subvol-211-disk-1' (__replicate_211-2_1618473000__)").

delete that volume on the target node after making sure it's not used by anything, then the sync should work.
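
a rough sketch of that on the target node, with the dataset name taken from the log above (double-check that nothing references it before destroying):

Code:
# on pve3: show the leftover dataset and any snapshots it still has
zfs list -t all -r rpool/data/subvol-211-disk-1

# make sure no guest config in the cluster references the volume
grep -r 'subvol-211-disk-1' /etc/pve/nodes/

# dry run first (-n shows what would be removed), then repeat without -n
zfs destroy -rnv rpool/data/subvol-211-disk-1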
 
Thanks fabian, but there is no such volume on the target node (pve3):
Code:
root@pve3:~# ls -l /rpool/data/

total 59
drwxr-xr-x 18 root root 23 Dec 22 07:54 subvol-172-disk-1
drwxr-xr-x 18 root root 23 Feb  1 17:47 subvol-183-disk-0
drwxr-xr-x 18 root root 23 Feb  1 17:47 subvol-184-disk-1
drwxr-xr-x 18 root root 26 Mar 29 07:40 subvol-191-disk-1
drwxr-xr-x 19 root root 24 Apr 15 06:49 subvol-211-disk-0

There is no rpool/data/subvol-211-disk-1 there.
 
please check with 'zfs list'. it is also possible that subsequent replication runs have cleaned it up already.
 
Thanks, you're right (last line):
Code:
zfs list
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool                         39.3G   410G      104K  /rpool
rpool/ROOT                    11.6G   410G       96K  /rpool/ROOT
rpool/ROOT/pve-1              11.6G   410G     11.6G  /
rpool/data                    27.5G   410G      128K  /rpool/data
rpool/data/subvol-172-disk-1  1.46G  98.5G     1.46G  /rpool/data/subvol-172-disk-1
rpool/data/subvol-183-disk-0  21.2G  28.8G     21.2G  /rpool/data/subvol-183-disk-0
rpool/data/subvol-184-disk-1  1002M  49.0G     1001M  /rpool/data/subvol-184-disk-1
rpool/data/subvol-191-disk-1  1.76G  48.3G     1.75G  /rpool/data/subvol-191-disk-1
rpool/data/subvol-211-disk-0  2.16G  47.9G     2.15G  /rpool/data/subvol-211-disk-0
rpool/data/subvol-211-disk-1    96K   410G       96K  /rpool/data/subvol-211-disk-1

I have removed the rest of it with:

Code:
# zfs destroy rpool/data/subvol-211-disk-1

Replication is now working fine, problem solved.

Thanks, Frank
 