Replication - Dataset does not exist

XaosBr

New Member
Nov 11, 2022
I'm getting this error. Could anyone help me?

Proxmox Virtual Environment 7.2-3
Virtual Machine 100 (CTS) on node 'pve02'
Logs
2022-11-11 15:17:04 100-0: start replication job
2022-11-11 15:17:04 100-0: guest => VM 100, running => 1603
2022-11-11 15:17:04 100-0: volumes => Storage01:vm-100-disk-0
2022-11-11 15:17:05 100-0: create snapshot '_replicate_100-0_1668194224_' on Storage01:vm-100-disk-0
2022-11-11 15:17:05 100-0: using secure transmission, rate limit: none
2022-11-11 15:17:05 100-0: full sync 'Storage01:vm-100-disk-0' (_replicate_100-0_1668194224_)
2022-11-11 15:17:05 100-0: full send of Storage01/vm-100-disk-0@_replicate_100-0_1668194224_ estimated size is 144G
2022-11-11 15:17:05 100-0: total estimated size is 144G
2022-11-11 15:17:06 100-0: TIME SENT SNAPSHOT Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:06 100-0: 15:17:06 101M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:07 100-0: 15:17:07 213M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:08 100-0: 15:17:08 325M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:09 100-0: 15:17:09 437M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:10 100-0: 15:17:10 549M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:11 100-0: 15:17:11 661M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:12 100-0: 15:17:12 772M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:13 100-0: 15:17:13 884M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:14 100-0: 15:17:14 996M Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:15 100-0: 15:17:15 1.08G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:16 100-0: 15:17:16 1.19G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:17 100-0: 15:17:17 1.30G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:18 100-0: 15:17:18 1.41G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:19 100-0: 15:17:19 1.52G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:20 100-0: 15:17:20 1.63G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:21 100-0: 15:17:21 1.74G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:22 100-0: 15:17:22 1.85G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:23 100-0: 15:17:23 1.96G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:24 100-0: 15:17:24 2.07G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:25 100-0: 15:17:25 2.17G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:26 100-0: 15:17:26 2.28G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:27 100-0: 15:17:27 2.39G Storage01/vm-100-disk-0@_replicate_100-0_1668194224_
2022-11-11 15:17:28 100-0: warning: cannot send 'Storage01/vm-100-disk-0@_replicate_100-0_1668194224_': Input/output error
2022-11-11 15:17:29 100-0: cannot receive new filesystem stream: checksum mismatch
2022-11-11 15:17:29 100-0: cannot open 'Storage01/vm-100-disk-0': dataset does not exist
2022-11-11 15:17:29 100-0: command 'zfs recv -F -- Storage01/vm-100-disk-0' failed: exit code 1
2022-11-11 15:17:29 100-0: delete previous replication snapshot '_replicate_100-0_1668194224_' on Storage01:vm-100-disk-0
2022-11-11 15:17:29 100-0: end replication job with error: command 'set -o pipefail && pvesm export Storage01:vm-100-disk-0 zfs - -with-snapshots 1 -snapshot _replicate_100-0_1668194224_ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve01' root@192.168.1.120 -- pvesm import Storage01:vm-100-disk-0 zfs - -with-snapshots 1 -snapshot _replicate_100-0_1668194224_ -allow-rename 0' failed: exit code 1
 
Hi,
please post the output of pveversion -v (both source and target). The log mentions an I/O error. Does that error always happen at the same point? What is logged in /var/log/syslog (both source and target) around the time the issue occurs?
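
For example, something along these lines would gather the relevant information on both nodes (the time window is taken from the log above and needs adjusting to the actual failure time; the pool name is assumed to be Storage01, matching the storage name):

# run on both pve02 (source) and pve01 (target)
pveversion -v
# journal/syslog entries around the failed replication run
journalctl --since "2022-11-11 15:16:00" --until "2022-11-11 15:18:00"
# the send aborted with an Input/output error, so also check whether ZFS reports pool errors
zpool status -v Storage01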
 
Source:
root@pve02:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-helper: 7.2-2
pve-kernel-5.15: 7.2-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-6
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.2-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.1.8-1
proxmox-backup-file-restore: 2.1.8-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-10
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-1
pve-ha-manager: 3.3-4
pve-i18n: 2.7-1
pve-qemu-kvm: 6.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

Target:
root@pve01:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-helper: 7.2-2
pve-kernel-5.15: 7.2-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-6
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.2-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.1.8-1
proxmox-backup-file-restore: 2.1.8-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-10
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-1
pve-ha-manager: 3.3-4
pve-i18n: 2.7-1
pve-qemu-kvm: 6.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

/var/log/syslog:
Nov 18 15:28:40 pve02 pvescheduler[2102210]: send/receive failed, cleaning up snapshot(s)..
Nov 18 15:28:40 pve02 pvescheduler[2102210]: 100-0: got unexpected replication job error - command 'set -o pipefail && pvesm export Storage01:vm-100-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1668799681__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve01' root@192.168.1.120 -- pvesm import Storage01:vm-100-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1668799681__ -allow-rename 0' failed: exit code 1
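
As a side note, the failing job can also be triggered manually, outside of pvescheduler, to get the full replication output; a sketch (please double-check the exact option names in man pvesr):

# on the source node pve02
pvesr status
pvesr run --id 100-0 --verbose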
 
root@pve02:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
root@pve01:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)

I do not know if it will fix your problem, but in general, and to rule it out: both nodes have never been updated; see here [1] for how to do that.

Do you have this problem only with this particular VM and/or storage, or also with others?

[1] https://forum.proxmox.com/threads/im-unable-to-upload-files-to-my-proxmox-server.114541/#post-495356
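
A rough sketch of the usual update procedure, assuming the package repositories on both nodes are configured correctly (see [1]):

# run on both pve02 and pve01
apt update
apt dist-upgrade
# reboot afterwards if a new kernel was installed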
 
I did some tests:

I created a new VM and enabled replication, and it succeeded.
After I installed the operating system, it gave an error.
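
If it helps to narrow things down, one way to check whether the Input/output error comes from the sending side alone is to send a throwaway snapshot to /dev/null on the source node (the snapshot name iotest is just an example):

# on pve02, the source node
zfs snapshot Storage01/vm-100-disk-0@iotest
zfs send Storage01/vm-100-disk-0@iotest > /dev/null
# if this also aborts with an I/O error, the problem is on the source pool itself
zfs destroy Storage01/vm-100-disk-0@iotest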