Proxmox 5.1 Configure ZFS Replication

Gent (Active Member), Nov 30, 2017
I have used the instructions (https://pve.proxmox.com/wiki/Cluster_Manager) to create a cluster:

hp1# pvecm create YOUR-CLUSTER-NAME
hp2# pvecm add IP-ADDRESS-CLUSTER

The cluster seems to be working as I see the other nodes:

[Screenshot attached: replication1.jpg, showing the other nodes in the GUI]

However, when I set up replication for a CT, it fails (see below). Anybody have an idea what I might be doing wrong?

2017-11-29 22:52:00 19101-0: start replication job
2017-11-29 22:52:00 19101-0: guest => CT 19101, running => 0
2017-11-29 22:52:00 19101-0: volumes => clusterpool:subvol-19101-disk-1
2017-11-29 22:52:01 19101-0: create snapshot '__replicate_19101-0_1512013920__' on clusterpool:subvol-19101-disk-1
2017-11-29 22:52:02 19101-0: full sync 'clusterpool:subvol-19101-disk-1' (__replicate_19101-0_1512013920__)
2017-11-29 22:52:02 19101-0: internal error: Invalid argument

2017-11-29 22:52:02 19101-0: command 'zfs send -Rpv -- rpool/subvol-19101-disk-1@__replicate_19101-0_1512013920__' failed: got signal 6
2017-11-29 22:52:02 19101-0: cannot receive: failed to read from stream
2017-11-29 22:52:02 19101-0: cannot open 'rpool/subvol-19101-disk-1': dataset does not exist
2017-11-29 22:52:02 19101-0: command 'zfs recv -F -- rpool/subvol-19101-disk-1' failed: exit code 1
2017-11-29 22:52:02 19101-0: delete previous replication snapshot '__replicate_19101-0_1512013920__' on clusterpool:subvol-19101-disk-1
2017-11-29 22:52:02 19101-0: end replication job with error: command 'set -o pipefail && pvesm export clusterpool:subvol-19101-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_19101-0_1512013920__ | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=vm192' root@10.99.0.192 -- pvesm import clusterpool:subvol-19101-disk-1 zfs - -with-snapshots 1' failed: exit code 1
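Note that the job command wraps the transfer in `set -o pipefail && pvesm export ... | ssh ... pvesm import`, so the whole job fails as soon as `zfs send` aborts (signal 6) on the sending side; the `cannot receive` / `dataset does not exist` messages are just fallout on the receiving end. A minimal, PVE-independent demonstration of the pipefail behaviour (generic shell, nothing Proxmox-specific):

```shell
# pipefail makes a pipeline's exit status reflect a failing earlier stage,
# not just the last one -- so a failed 'zfs send' fails the whole job even
# though the receiving command is the last stage of the pipe.
set -o pipefail
if false | cat; then            # cat (the last stage) succeeds...
  echo "pipeline ok"
else
  echo "pipeline failed"        # ...but pipefail reports the 'false' failure
fi
```

So when debugging, look at the first error from the sending side (here: `zfs send ... got signal 6`) rather than the receive errors that follow it.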



pvecm status:
Quorum information
------------------
Date: Wed Nov 29 22:47:45 2017
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000001
Ring ID: 1/12
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.99.0.191 (local)
0x00000002 1 10.99.0.192
0x00000003 1 10.99.0.193


cat /etc/pve/storage.cfg:
dir: local
path /var/lib/vz
content backup,iso,vztmpl

zfspool: local-zfs
pool rpool/data
content rootdir,images
sparse 1

zfspool: clusterpool
pool rpool
content rootdir,images
nodes vm192,vm193,vm191
sparse 1


pveversion:
pve-manager/5.1-36/131401db (running kernel: 4.10.15-1-pve)

 
Hi,

this kernel does not work with your zfsutils version.
Kernel 4.10.15-1 ships the ZFS 0.6.5 kernel modules,
but PVE 5.1 uses zfsutils 0.7.3, so you have to upgrade your kernel.
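A quick way to spot this kind of mismatch is to compare the ZFS version compiled into the running kernel with the installed userland tools. The paths/commands in the comments assume a stock PVE/Debian install; the `same_branch` helper below is just an illustrative sketch, not a PVE tool:

```shell
# On a live node (illustrative):
#   cat /sys/module/zfs/version                     # kernel module, e.g. 0.6.5.9-1
#   dpkg-query -W -f '${Version}\n' zfsutils-linux  # userland,      e.g. 0.7.3-pve1~bpo9
#
# Hypothetical helper: do two version strings share the same major.minor branch?
branch() { printf '%s\n' "${1%%-*}" | cut -d. -f1,2; }   # "0.6.5.9-1" -> "0.6"
same_branch() { [ "$(branch "$1")" = "$(branch "$2")" ]; }
```

With the versions from this thread, `same_branch 0.6.5.9-1 0.7.3-pve1~bpo9` fails, which is exactly the kernel/userland mismatch described above.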
 
Thanks for the direction. I just did that and now replication works.

pveversion:
pve-manager/5.1-36/131401db (running kernel: 4.13.8-1-pve)

Hints for others (note that my initial install was 5.0):
- Add 'deb http://download.proxmox.com/debian stretch pve-no-subscription' to /etc/apt/sources.list
- Remove the enterprise repo: rm -f /etc/apt/sources.list.d/pve-enterprise.list
- apt-get update
- apt-get dist-upgrade
- Reboot the system in order to use the new PVE kernel
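The "add the repo line" step can be made safe to re-run so the line never ends up duplicated in sources.list. The `add_repo` function here is a hypothetical helper (not part of PVE); the target path is the stock /etc/apt/sources.list:

```shell
# Hypothetical helper: append an apt source line only if it is not already
# present (exact, whole-line match), so re-running it is harmless.
add_repo() {
  grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

# On the node (as root):
#   add_repo /etc/apt/sources.list \
#     'deb http://download.proxmox.com/debian stretch pve-no-subscription'
```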

https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
 
