I have set up a 2-node cluster, and migration of a VM from one node to the other works fine, but when I try to enable replication I get the error shown in the log below.
How can the raw flag be set? Or is this currently not supported?
Code:
2020-11-03 22:44:01 106-0: start replication job
2020-11-03 22:44:01 106-0: guest => VM 106, running => 19766
2020-11-03 22:44:01 106-0: volumes => name:vm-106-disk-0
2020-11-03 22:44:01 106-0: freeze guest filesystem
2020-11-03 22:44:01 106-0: create snapshot '__replicate_106-0_1604439841__' on name:vm-106-disk-0
2020-11-03 22:44:01 106-0: thaw guest filesystem
2020-11-03 22:44:01 106-0: using secure transmission, rate limit: none
2020-11-03 22:44:01 106-0: full sync 'name:vm-106-disk-0' (__replicate_106-0_1604439841__)
2020-11-03 22:44:02 106-0: cannot send pool/name/vm-106-disk-0@__replicate_106-0_1604439841__: encrypted dataset pool/name/vm-106-disk-0 may not be sent with properties without the raw flag
2020-11-03 22:44:02 106-0: command 'zfs send -Rpv -- pool/name/vm-106-disk-0@__replicate_106-0_1604439841__' failed: exit code 1
2020-11-03 22:44:02 106-0: cannot receive: failed to read from stream
2020-11-03 22:44:02 106-0: cannot open 'pool/name/vm-106-disk-0': dataset does not exist
2020-11-03 22:44:02 106-0: command 'zfs recv -F -- pool/name/vm-106-disk-0' failed: exit code 1
2020-11-03 22:44:02 106-0: delete previous replication snapshot '__replicate_106-0_1604439841__' on name:vm-106-disk-0
2020-11-03 22:44:02 106-0: end replication job with error: command 'set -o pipefail && pvesm export name:vm-106-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_106-0_1604439841__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=nodename' root@ip.ad.dr.ess -- pvesm import name:vm-106-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 1
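If I read the log correctly, replication runs 'zfs send -Rpv', and OpenZFS refuses to include dataset properties (which both -R and -p do) in a send stream from an encrypted dataset unless the raw flag (-w / --raw) is also given. Since Proxmox builds that send command itself, I don't see a way to add the flag from the GUI. Just to illustrate what the raw flag means at the plain ZFS level, a manual raw send would look roughly like the sketch below; this is outside of pvesm entirely, with the dataset and snapshot names copied from the log and "othernode" only a placeholder.
Code:
# Illustration only, not what pvesm/pvesr actually runs. With -w/--raw the
# encrypted blocks are sent as-is, so no properties need to be resolved on the
# sending side and the dataset arrives on the target still encrypted.
zfs send -w -- pool/name/vm-106-disk-0@__replicate_106-0_1604439841__ \
  | ssh root@othernode zfs recv -F -- pool/name/vm-106-disk-0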
All my VMs are on node1 on an encrypted datapool. I tried removing the second node from the cluster, creating an unencrypted pool on it, and re-joining it to the cluster, but that does not work either. So what are my options if I want to enable replication of the VMs to the second cluster node while keeping the encrypted datapool?
While migrating some VMs from one node to the other, I noticed that some of them migrate without problems while others fail with the same error as above. The log of a failed migration follows. These VMs are on the same datapool, so I don't understand what the difference could be, or why some can be migrated to the other node while others cannot. On the other hand, this gives me hope that it is not simply a matter of replication being impossible on encrypted datasets. Thanks in advance for any assistance.
Code:
2020-11-04 00:14:04 starting migration of VM ID to node 'nodename' (192.168.x.y)
2020-11-04 00:14:05 found local, replicated disk 'name:vm-106-disk-0' (in current VM config)
2020-11-04 00:14:05 scsi0: start tracking writes using block-dirty-bitmap 'repl_scsi0'
2020-11-04 00:14:05 replicating disk images
2020-11-04 00:14:05 start replication job
2020-11-04 00:14:05 guest => VM 106, running => 19766
2020-11-04 00:14:05 volumes => name:vm-106-disk-0
2020-11-04 00:14:06 freeze guest filesystem
2020-11-04 00:14:06 create snapshot '__replicate_106-0_1604445245__' on name:vm-106-disk-0
2020-11-04 00:14:06 thaw guest filesystem
2020-11-04 00:14:06 using secure transmission, rate limit: none
2020-11-04 00:14:06 full sync 'name:vm-106-disk-0' (__replicate_106-0_1604445245__)
2020-11-04 00:14:07 cannot send pool/name/vm-106-disk-0@__replicate_106-0_1604445245__: encrypted dataset pool/name/vm-106-disk-0 may not be sent with properties without the raw flag
2020-11-04 00:14:07 command 'zfs send -Rpv -- pool/name/vm-106-disk-0@__replicate_106-0_1604445245__' failed: exit code 1
2020-11-04 00:14:07 cannot receive: failed to read from stream
2020-11-04 00:14:07 cannot open 'pool/name/vm-106-disk-0': dataset does not exist
2020-11-04 00:14:07 command 'zfs recv -F -- pool/name/vm-106-disk-0' failed: exit code 1
send/receive failed, cleaning up snapshot(s)..
2020-11-04 00:14:07 delete previous replication snapshot '__replicate_106-0_1604445245__' on name:vm-106-disk-0
2020-11-04 00:14:07 end replication job with error: command 'set -o pipefail && pvesm export name:vm-106-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_106-0_1604445245__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=nodename' root@192.168.x.y -- pvesm import name:vm-106-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 1
2020-11-04 00:14:07 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export name:vm-106-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_106-0_1604445245__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=nodename' root@192.168.x.y -- pvesm import name:vm-106-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 1
2020-11-04 00:14:07 aborting phase 1 - cleanup resources
2020-11-04 00:14:07 scsi0: removing block-dirty-bitmap 'repl_scsi0'
2020-11-04 00:14:07 ERROR: migration aborted (duration 00:00:03): Failed to sync data - command 'set -o pipefail && pvesm export name:vm-106-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_106-0_1604445245__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=nodename' root@192.168.x.y -- pvesm import name:vm-106-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 1
TASK ERROR: migration aborted
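Coming back to why only some VMs fail: as far as I understand, ZFS encryption is a per-dataset property that zvols inherit from the parent they were created under, so maybe not all of the VM disks actually sit below the same encryptionroot. A quick way to check is sketched below (the pool/dataset name is taken from the log; adjust it to the real layout).
Code:
# Show the encryption state of every zvol under the dataset that holds the VM disks.
# Disks reporting encryption=off (or a different encryptionroot) would explain why
# those VMs migrate without hitting the raw-flag error.
zfs get -r -t volume encryption,encryptionroot pool/name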