Replication error

MH_MUC

Hi,

I have a cluster with two nodes.
Migration and replication from node A to node B work fine, but replication from B to A fails.

pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.4-11 (running version: 6.4-11/28d576c2)
pve-kernel-5.4: 6.4-4
pve-kernel-helper: 6.4-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

Linux server20 5.4.73-1-pve #1
pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve)
pve-manager: 6.4-11 (running version: 6.4-11/28d576c2)
pve-kernel-helper: 7.0-3
pve-kernel-5.4: 6.4-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 1.1.10-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 6.4-1
pve-container: 4.0-7
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

Linux server21 5.4.124-1-pve #1 SMP PVE 5.4.124-1

Failure log:
Code:
2021-07-07 16:35:00 101-0: start replication job
2021-07-07 16:35:00 101-0: guest => CT 101, running => 1
2021-07-07 16:35:00 101-0: volumes => local-zfs:subvol-101-disk-0
2021-07-07 16:35:01 101-0: freeze guest filesystem
2021-07-07 16:35:04 101-0: create snapshot '__replicate_101-0_1625668500__' on local-zfs:subvol-101-disk-0
2021-07-07 16:35:04 101-0: thaw guest filesystem
2021-07-07 16:35:04 101-0: using secure transmission, rate limit: 20000 MByte/s
2021-07-07 16:35:04 101-0: full sync 'local-zfs:subvol-101-disk-0' (__replicate_101-0_1625668500__)
2021-07-07 16:35:04 101-0: using a bandwidth limit of 20000000000 bps for transferring 'local-zfs:subvol-101-disk-0'
2021-07-07 16:35:05 101-0: full send of rpool/data/subvol-101-disk-0@__replicate_101-0_1625668500__ estimated size is 21.5G
2021-07-07 16:35:05 101-0: total estimated size is 21.5G
2021-07-07 16:35:06 101-0: Unknown option: snapshot
2021-07-07 16:35:06 101-0: 400 unable to parse option
2021-07-07 16:35:06 101-0: pvesm import <volume> <format> <filename> [OPTIONS]
2021-07-07 16:35:06 101-0: 2226852 B 2.1 MB 0.61 s 3628415 B/s 3.46 MB/s
2021-07-07 16:35:06 101-0: write: Broken pipe
2021-07-07 16:35:06 101-0: warning: cannot send 'rpool/data/subvol-101-disk-0@__replicate_101-0_1625668500__': signal received
2021-07-07 16:35:06 101-0: cannot send 'rpool/data/subvol-101-disk-0': I/O error
2021-07-07 16:35:06 101-0: command 'zfs send -Rpv -- rpool/data/subvol-101-disk-0@__replicate_101-0_1625668500__' failed: exit code 1
2021-07-07 16:35:06 101-0: delete previous replication snapshot '__replicate_101-0_1625668500__' on local-zfs:subvol-101-disk-0
2021-07-07 16:35:06 101-0: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_101-0_1625668500__ | /usr/bin/cstream -t 20000000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=server20' root@NODE-A-IP -- pvesm import local-zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_101-0_1625668500__ -allow-rename 0' failed: exit code 255

Any idea how to fix this?
 
Hi,
it seems like one of your nodes has been partially upgraded to 7.0 for some reason:
Code:
libpve-storage-perl: 6.4-1
....vs....
libpve-storage-perl: 7.0-9
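To spot every mismatched package at once, you can diff the two `pveversion -v` outputs. A minimal sketch (the `nodeA.txt`/`nodeB.txt` names and the shortened package lists are placeholders; on a live cluster you would capture the real lists with `ssh root@<node> pveversion -v > node<X>.txt`):

```shell
#!/bin/sh
# Sketch: find packages installed at different versions on two nodes.
# The two here-docs stand in for the full `pveversion -v` output of each node.
cat > nodeA.txt <<'EOF'
libpve-storage-perl: 6.4-1
pve-docs: 6.4-2
pve-firmware: 3.2-4
EOF
cat > nodeB.txt <<'EOF'
libpve-storage-perl: 7.0-9
pve-docs: 7.0-5
pve-firmware: 3.2-4
EOF
# diff exits with status 1 when the files differ, so mask the status
# if this runs inside a `set -e` script
diff nodeA.txt nodeB.txt || true
```

Lines marked `<`/`>` in the output are the packages that differ between the nodes; identical lines (like `pve-firmware` above) are suppressed.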

There were changes in the replication API, so 6.x and 7.x versions cannot talk to each other (EDIT: replication from 6.x to 7.x should still work, but not the other way around). Please check your APT repository configuration. Depending on how far along the upgrade already is, it might be possible to downgrade the affected packages; but if more than a few packages are involved, fully upgrading the whole cluster to 7.0 is the safer option.
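For reference, a Proxmox VE 6.x node should only have 6.x (Debian Buster) repositories in `/etc/apt/sources.list` and `/etc/apt/sources.list.d/`. A sketch of what to keep versus what to remove (exact file locations and repository choice, e.g. enterprise vs. no-subscription, vary per setup):

```text
# keep for PVE 6.x, e.g. the no-subscription repository:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

# remove or comment out beta/test entries such as:
# deb http://download.proxmox.com/debian/pve bullseye pvetest
```

After fixing the entries, run `apt update` and check with `pveversion -v` that no further 7.x packages get pulled in.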
 
Hmm, sorry, silly mistake!
I accidentally picked the testing repository. I guess I will start over with a fresh install.
Upgrading to a beta or any other unstable system state is not a good starting point, I guess.

Thank you so much for pointing out the mistake I made.
 
