Hello,
On PVE 7, I have an LXC container set up with replication to another PVE 7 node, and the replication is not working reliably.
When I trigger it manually, the process runs for about 3-4 minutes and then stops in the middle, without any information about an error or failure.
Looking at the running processes, I can see the replication is no longer active (no zfs send or receive processes on either node), and I have confirmed the ZFS dataset was not completely imported either.
In the logs, I only see incomplete entries, e.g.:
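For reference, the process check I ran on both nodes was roughly the following (assuming standard procps tools are available; the bracketed first letter is just a trick to keep grep from matching its own process):

```shell
# List any ZFS send/receive processes still running.
# Prints nothing once the replication workers have exited.
ps aux | grep -E '[z]fs (send|recv)'
```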
Code:
2021-12-05 20:38:57 120-2: 20:38:57 4.30G rpool/data/subvol-120-disk-2@zfs-auto-snap_monthly-2021-01-01-1152
2021-12-05 20:38:58 120-2: 20:38:58 4.39G rpool/data/subvol-120-disk-2@zfs-auto-snap_monthly-2021-01-01-1152
2021-12-05 20:38:59 120-2: 20:38:59 4.50G rpool/data/subvol-120-disk-2@zfs-auto-snap_monthly-2021-01-01-1152
2021-12-05 20:39:00 120-2: 20:39:00 4.59G rpool/data/subvol-120-disk-2@zfs-auto-snap_monthly-2021-01-01-1152
2021-12-05 20:39:01 120-2: 20:39:01 4.68G rpool/data/subvol-120-disk-2@zfs-auto-snap_monthly-2021-01-01-1152
2021-12-05 20:39:02 120-2: 20:39:02 4.77G rpool/data/
It looks like the process was terminated right in the middle.
I've tried running the command line to send and receive the ZFS dataset manually, and it works without any error:
Code:
pvesm export data:subvol-120-disk-2 zfs - -with-snapshots 1 -snapshot __replicate_120-2_1638754681__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=boxter' root@192.168.14.30 -- pvesm import data:subvol-120-disk-2 zfs - -with-snapshots 1 -snapshot __replicate_120-2_1638754681__ -allow-rename 0
It's probably a bug or something. Has anybody experienced something similar?