Hi all,
Are there any limitations on the replication & migration of VMs/CTs stored in an encrypted ZFS pool?
I get the error below when I try to run the replication:
2020-05-30 22:09:03 101-0: start replication job
2020-05-30 22:09:03 101-0: guest => CT 101, running => 1
2020-05-30 22:09:03 101-0: volumes => encrypted_zfs:subvol-101-disk-0
2020-05-30 22:09:04 101-0: freeze guest filesystem
2020-05-30 22:09:04 101-0: create snapshot '__replicate_101-0_1590869343__' on encrypted_zfs:subvol-101-disk-0
2020-05-30 22:09:04 101-0: thaw guest filesystem
2020-05-30 22:09:04 101-0: using secure transmission, rate limit: none
2020-05-30 22:09:04 101-0: full sync 'encrypted_zfs:subvol-101-disk-0' (__replicate_101-0_1590869343__)
2020-05-30 22:09:05 101-0: cannot send data/enc_data1/subvol-101-disk-0@__replicate_101-0_1590869343__: encrypted dataset data/enc_data1/subvol-101-disk-0 may not be sent with properties without the raw flag
2020-05-30 22:09:05 101-0: command 'zfs send -Rpv -- data/enc_data1/subvol-101-disk-0@__replicate_101-0_1590869343__' failed: exit code 1
2020-05-30 22:09:05 101-0: cannot receive: failed to read from stream
2020-05-30 22:09:05 101-0: cannot open 'data/enc_data1/subvol-101-disk-0': dataset does not exist
2020-05-30 22:09:05 101-0: command 'zfs recv -F -- data/enc_data1/subvol-101-disk-0' failed: exit code 1
2020-05-30 22:09:05 101-0: delete previous replication snapshot '__replicate_101-0_1590869343__' on encrypted_zfs:subvol-101-disk-0
2020-05-30 22:09:05 101-0: end replication job with error: command 'set -o pipefail && pvesm export encrypted_zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_101-0_1590869343__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxtest3' root@10.102.32.123 -- pvesm import encrypted_zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 1
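From the error message, it looks like an encrypted dataset can only be sent with properties when the raw flag (-w) is used, while the replication job runs a plain 'zfs send -Rpv'. Just to illustrate what I mean (same dataset and snapshot names as in the log above; I have not actually tried this, and I don't know if it is a supported workaround), a manual raw send to the target node would look roughly like:

# manual raw-send sketch, NOT the command the replication job runs
zfs send -Rwv -- data/enc_data1/subvol-101-disk-0@__replicate_101-0_1590869343__ | ssh root@10.102.32.123 zfs recv -F -- data/enc_data1/subvol-101-disk-0

I am not sure whether the built-in storage replication can do raw sends at all, or whether this is simply a known limitation for encrypted pools.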
My Proxmox configuration is below:
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
Thanks for your help.