PVE 5.1 does not finish ZFS Replication

cpzengel
Active Member · Nov 12, 2015 · Aschaffenburg, Germany · zfs.rocks
hi,

A fresh installation does not complete any SSH replication.
I first tested with PVE 3.4 as the target, then updated and used a freshly created zpool as the target.
After the first replication starts, the system never finishes the job.
Cancelling the replication works, but it leaves the target dataset busy and keeps the zfs recv process running.
Only rebooting the target releases the dataset.

We formerly had this with FreeNAS 9.10 > PVE 4/5.

A local replication from one tank to another is working!

This is extremely important to us, so please help!

The action I took with an almost empty snapshot was:

zfs send -pv rpool2/data/HIS-KASSEN@zfs-auto-snap_hourly-2017-10-31-1617 | ssh root@10.0.0.9 zfs recv -dvF Raid10
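One way to narrow down where a transfer like this stalls (my suggestion, not something from the thread; `pv` is an extra tool that must be installed) is to put a throughput meter into the pipe:

```shell
# Sketch: insert pv between send and ssh to watch whether bytes keep flowing.
# If pv shows traffic but recv never completes, the stall is on the receive
# side; if pv's counter stops, the stall is in the send/transport path.
# Dataset, snapshot, host and pool names are taken from the command above.
zfs send -pv rpool2/data/HIS-KASSEN@zfs-auto-snap_hourly-2017-10-31-1617 \
  | pv -brt \
  | ssh root@10.0.0.9 zfs recv -dvF Raid10
```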




proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-35 (running version: 5.1-35/722cc488)
pve-kernel-4.13.4-1-pve: 4.13.4-25
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.2-pve1~bpo90
 

cpzengel

So I tested it with a cluster: a fresh install on both machines, using

PVE storage replication via the web GUI.

The replication takes forever for a 64 KB dataset.


Replication Log
2017-11-01 18:29:00 101-0: start replication job
2017-11-01 18:29:00 101-0: guest => VM 101, running => 0
2017-11-01 18:29:00 101-0: volumes => local-zfs:vm-101-disk-1
2017-11-01 18:29:01 101-0: create snapshot '__replicate_101-0_1509557340__' on local-zfs:vm-101-disk-1
2017-11-01 18:29:01 101-0: full sync 'local-zfs:vm-101-disk-1' (__replicate_101-0_1509557340__)

and it does not continue.

The dataset shows up on the remote side:

rpool/data/vm-101-disk-1 64K 2.63T 64K -


The receive processes never finish:

root 5895 0.3 0.4 294812 65692 ? Ss 18:29 0:00 /usr/bin/perl /usr/sbin/pvesm import local-zfs:vm-101-disk-1 zfs - -with-snapshots 1

root 5901 99.8 0.0 33600 3188 ? R 18:29 3:36 zfs recv -F -- rpool/data/vm-101-disk-1
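A few things worth checking while the recv hangs (my suggestion, not from the thread; PID 5901 is taken from the ps output above):

```shell
# Process state: 'D' (uninterruptible sleep) points at I/O or a kernel
# deadlock; 'R' (as in the ps output above, burning ~100% CPU) points at a
# busy loop inside the zfs recv path.
ps -o pid,stat,wchan:32,cmd -p 5901

# Kernel stack of the stuck process (requires root) - shows where in the
# kernel the receive is spinning or sleeping.
cat /proc/5901/stack

# Recent ZFS kernel events on the receiving host.
zpool events -v | tail -n 50
```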
 


wolfgang

Proxmox Retired Staff · Oct 1, 2014
Just to be sure: did you try a manual send/receive from PVE 5.1 (send) to PVE 5.1 (receive)?
 

cpzengel

Yes, I am a ZFS YouTube podcaster and have tested it as described above!

zfs send -pv rpool2/data/HIS-KASSEN@zfs-auto-snap_hourly-2017-10-31-1617 | ssh root@10.0.0.9 zfs recv -dvF Raid10

After transmitting the snapshot it hangs forever; it can only be interrupted with Ctrl+C, which crashes the remote side.
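As a side note (my suggestion, not something discussed in the thread): the zfsutils-linux 0.7.x in use here supports resumable receive, which at least avoids restarting the whole stream after a Ctrl+C. A sketch reusing the names from the command above; the target dataset path `Raid10/data/HIS-KASSEN` is my assumption based on how `recv -d` maps names:

```shell
# Receive with -s so an interrupted stream leaves a resume token behind.
zfs send -pv rpool2/data/HIS-KASSEN@zfs-auto-snap_hourly-2017-10-31-1617 \
  | ssh root@10.0.0.9 zfs recv -s -dvF Raid10

# After an interruption, fetch the resume token from the target dataset...
TOKEN=$(ssh root@10.0.0.9 zfs get -H -o value receive_resume_token Raid10/data/HIS-KASSEN)

# ...and resume the stream from that point instead of starting over.
zfs send -t "$TOKEN" | ssh root@10.0.0.9 zfs recv -s Raid10/data/HIS-KASSEN
```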
 

wolfgang

Is the target storage working properly?
I mean, can you use it locally?
 

wolfgang

Can you send me the output of "zpool status" and "zpool get all" to show me the pool settings on the target?
Also, "zfs get all rpool2/data/HIS-KASSEN" from the source would be helpful.
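For reference, the requested commands collected into files for posting (the output file names are my own choice, not from the thread):

```shell
# On the target: pool health plus every pool property.
{ zpool status; zpool get all; } > target-pool-info.txt

# On the source: all properties of the dataset being sent.
zfs get all rpool2/data/HIS-KASSEN > source-dataset-info.txt
```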
 
