Likely mypool/encrypted_data was not mounted when you wrote the file, and you instead wrote to a plain directory named encrypted_data on the unencrypted parent dataset mypool.
I created an account just to say I had this problem and this reply gave me enough information to fix it, thanks. I went back and forth across 5 separate inconclusive messages (each with several steps to verify various information) with GPT-4 before I searched online and got this.
I had this exact same problem, and I only realized it because I was testing a script I'm working on that automatically runs zfs load-key from another VM on the same network (under certain conditions: port 139 is open, the SMB share is not reachable, and the thumbprint matches). Originally I was running zfs load-key but never running zfs mount manually, so the encrypted dataset never got mounted and I was writing to an unencrypted path.
If you had this problem, be careful when fixing it, too. If you are not 100% sure of your current encryption key, back up all your data before running zfs unload-key for testing purposes.
If I had only ever loaded the key, instead of testing the unloaded behavior for this hobby project, I would probably never have realized I had a misconfiguration. I likely would have kept remoting into the server to run zfs load-key once, never bothering to run zfs unload-key and check the results.
As a diagnostic, run zfs get mounted. If you see that your encrypted dataset is not mounted, consider moving the unencrypted data to a different directory so you can mount the dataset (which may be empty) and then migrate the data into the path you originally intended to use.
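A minimal sketch of that check. On a real system the value would come from zfs get -H -o value mounted <dataset>, which prints just "yes" or "no"; here it's a stand-in string (and a hypothetical dataset name) so the snippet runs anywhere:

```shell
DATASET="mypool/encrypted_data"        # hypothetical name

# On a real system: mounted=$(zfs get -H -o value mounted "$DATASET")
mounted="no"                           # stand-in for the zfs call above

if [ "$mounted" = "yes" ]; then
    echo "$DATASET is mounted"
else
    echo "$DATASET is NOT mounted: files under its mountpoint live on the parent dataset"
fi
```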
This is an illustrative example of the problem I also had.
Example output of zfs list -o name,mountpoint:
Code:
NAME      MOUNTPOINT
ABC       /ABC
ABC/XYZ   /ABC/XYZ
Example output of zfs get mounted:
Code:
NAME     PROPERTY  VALUE  SOURCE
ABC      mounted   yes    -
ABC/XYZ  mounted   no     -
rpool    mounted   yes    -
Example file paths:
Code:
admin@examplehost:/$ ls
ABC bin dev [etc.]
admin@examplehost:/$ ls ABC/
XYZ
It certainly looks like the ZFS dataset ABC/XYZ is mounted, but that's just a directory I had made underneath the ABC dataset.
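One quick way to catch this (on Linux, with util-linux installed) is the mountpoint command, which tells a real mountpoint apart from a plain directory like my fake XYZ. A small runnable illustration, not using ZFS at all:

```shell
# The root filesystem is always a mountpoint:
mountpoint /

# A freshly created directory is not -- just like my XYZ directory was not:
d=$(mktemp -d)
mountpoint "$d" || true
rmdir "$d"
```

On the broken setup above, mountpoint /ABC/XYZ would have reported "is not a mountpoint" despite the zfs list output looking fine.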
If you go ahead and run sudo zfs mount -a (caution!), you will appear to have deleted all the data, because the blank (or stale) encrypted dataset gets mounted over the path holding your unencrypted files. This is scary, but in my case it was resolved by a reboot; your mileage may vary. The mv from one unencrypted path to another completed in a second for 10 TB of data, but moving the unencrypted directory aside, mounting the encrypted dataset at the old path, and then moving the data in has taken just short of 24 hours at this point.
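The speed difference makes sense: within one filesystem, mv is a metadata-only rename, while moving into the encrypted dataset rewrites (and encrypts) every block. A tiny runnable illustration of the rename case, using a temp directory rather than ZFS:

```shell
# Create a small file and "move" it within the same filesystem.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big" bs=1M count=10 2>/dev/null

# Same filesystem: this is a rename, effectively instant regardless of size.
mv "$tmp/big" "$tmp/renamed"

ls "$tmp"        # prints: renamed
rm -r "$tmp"
```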