[SOLVED] Replication & Migration encrypted zfs datasets

drdidji

Member
Apr 8, 2020
Hi all,

are there any limitations for the replication & migration of a VM/CT stored in an encrypted ZFS pool?

I get the error below when I try to run the replication:

2020-05-30 22:09:03 101-0: start replication job
2020-05-30 22:09:03 101-0: guest => CT 101, running => 1
2020-05-30 22:09:03 101-0: volumes => encrypted_zfs:subvol-101-disk-0
2020-05-30 22:09:04 101-0: freeze guest filesystem
2020-05-30 22:09:04 101-0: create snapshot '__replicate_101-0_1590869343__' on encrypted_zfs:subvol-101-disk-0
2020-05-30 22:09:04 101-0: thaw guest filesystem
2020-05-30 22:09:04 101-0: using secure transmission, rate limit: none
2020-05-30 22:09:04 101-0: full sync 'encrypted_zfs:subvol-101-disk-0' (__replicate_101-0_1590869343__)
2020-05-30 22:09:05 101-0: cannot send data/enc_data1/subvol-101-disk-0@__replicate_101-0_1590869343__: encrypted dataset data/enc_data1/subvol-101-disk-0 may not be sent with properties without the raw flag
2020-05-30 22:09:05 101-0: command 'zfs send -Rpv -- data/enc_data1/subvol-101-disk-0@__replicate_101-0_1590869343__' failed: exit code 1
2020-05-30 22:09:05 101-0: cannot receive: failed to read from stream
2020-05-30 22:09:05 101-0: cannot open 'data/enc_data1/subvol-101-disk-0': dataset does not exist
2020-05-30 22:09:05 101-0: command 'zfs recv -F -- data/enc_data1/subvol-101-disk-0' failed: exit code 1
2020-05-30 22:09:05 101-0: delete previous replication snapshot '__replicate_101-0_1590869343__' on encrypted_zfs:subvol-101-disk-0
2020-05-30 22:09:05 101-0: end replication job with error: command 'set -o pipefail && pvesm export encrypted_zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_101-0_1590869343__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxtest3' root@10.102.32.123 -- pvesm import encrypted_zfs:subvol-101-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 1
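
From the error it looks like the replication job calls zfs send with -R and -p but without the raw flag, which an encrypted dataset apparently requires. Manually, the send would presumably have to look roughly like this (just my reading of the error message, I don't see an option in pvesr for it):

zfs send -w -Rv -- data/enc_data1/subvol-101-disk-0@__replicate_101-0_1590869343__ | ssh root@10.102.32.123 zfs recv -F -- data/enc_data1/subvol-101-disk-0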


my Proxmox configuration below:

proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Thanks for your help.
 
Hi, I am building a geocluster with Proxmox spread across rather shady data centers. Even the Proxmox team told me it is not possible, but I think that with minor modifications to the corosync settings I made it work (we are testing it now).
The problem is that we don't trust the data centers' security and privacy, so we use ZFS encryption to prevent a data leak if someone takes one of our servers.
And now we are facing the same problem with replication between ZFS encrypted storages.
Please fix this - I think it is a very important feature for privacy!
 
Hi,
I ran into the same issue, though my use case is a little bit different.
FWIW, here is how I plan to deal with it.

I use encrypted datasets for my backups, so they are secured by default.
As we are a small company, we don't need HA or a dedicated storage pool, but I like the idea of having a second node to synchronize data to, ready to take over if the main node fails.

This second node is not in production yet, but I plan to do the following as a workaround:
- use sanoid/syncoid to sync the entire encrypted dataset from the main node to the second node (see the sketch right after this list),
- use the same key file for encryption on both nodes for this to work,
- migrating a VM (offline) is then just a matter of moving the VM config file from /etc/pve/nodes/{MAIN}/ to /etc/pve/nodes/{SECOND}/
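
For reference, a rough sketch of what I have in mind, not tested yet; the node names (pve-main, pve-second), the pool data/enc_data1 and VMID 101 are just placeholders borrowed from this thread:

syncoid --recursive --sendoptions=w data/enc_data1 root@pve-second:data/enc_data1
mv /etc/pve/nodes/pve-main/lxc/101.conf /etc/pve/nodes/pve-second/lxc/101.conf

--sendoptions=w should make syncoid pass the raw flag to zfs send, so the data stays encrypted on the wire and on the second node; as both nodes have the same key file, the second node can load the key and mount the dataset when needed. For a QEMU VM the config file lives under /etc/pve/nodes/<node>/qemu-server/ instead of lxc/.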

drawbacks:
- sharing the encryption key obviously lowers security;
- replication and migration are not done within PVE, so they are not visible in the GUI;
- once a VM is migrated, its backup to a dedicated ZFS-only backup server (i.e. not Proxmox Backup Server) has to be adjusted (the existing backed-up dataset has to be destroyed and synced again);
- maybe something else I did not see?

cheers,
lumi
 
Bumping this one to see if there has been any progress. This is a pretty big caveat that should be added to the wiki/documentation.

Is this an issue for pve-zsync as well?
 
Today we prepared our servers for storage replication. Everything went well until we found that pvesr cannot handle encrypted datasets. Nothing about this is written in the documentation...

We would appreciate a solution, as our DRP relies on pvesr.
 
I agree, this should be made very clear.

I am using Syncoid successfully and it works pretty well.
 
