ZFS pool and encryption

I have created two datasets, one 'edata' (for encrypted data) and one 'edata-proxmox' (intended for VM images, containers, and disks), because of the notes given above regarding zvol block devices. From a dataset creation point of view on the host, however, there is nothing different between the two datasets. What speaks against putting the VM images, containers, and disks inside the 'edata' dataset, for example as edata/proxmox, instead of in a dedicated dataset like 'edata-proxmox'?
 
Hi,


There is nothing wrong with having a ZFS storage in PVE point to a sub-dataset like edata/proxmox. Just specify that path as the pool.
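For example, adding the sub-dataset as a ZFS storage from the command line could look like this (the storage ID and the content types are just an illustration, not something taken from the reply above); the same can be done in the GUI under Datacenter -> Storage -> Add -> ZFS:
Code:
pvesm add zfspool edata-proxmox --pool edata/proxmox --content images,rootdir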

If you only create a directory-based storage, then you won't be able to use the integrated ZFS features. PVE expects to work with a dataset, because ZFS management happens at the dataset level as opposed to the folder level (e.g. all the dataset properties, snapshots, cloning, etc.).
Note that
Code:
mkdir /myzpool/newdir
zfs create myzpool/newdataset
are very different. You won't be able to do ZFS operations on /myzpool/newdir, because it is only a folder.
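To make the difference concrete: ZFS operations such as snapshots only work on the dataset, not on the plain folder (a small sketch reusing the names above):
Code:
zfs snapshot myzpool/newdataset@test    # works, datasets can be snapshotted
zfs snapshot myzpool/newdir@test        # fails, 'myzpool/newdir' is not a dataset
zfs get compression myzpool/newdataset  # per-dataset properties only exist here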
 
Great, understood! Does it also mean that I don't have to encrypt the sub-dataset separately? Is it encrypted by its parent dataset, i.e. in my case 'edata'?

Hi,

Yes, after doing zfs create edata/proxmox you can check with
Code:
zfs get encryption,encryptionroot edata/proxmox
The encryptionroot is the dataset whose key needs to be loaded for edata/proxmox to be accessible and should be edata in your case.
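The output should then look roughly like the following; the exact cipher depends on how the parent was created (aes-256-gcm is the usual default when encryption is simply set to 'on'), so treat the values as an example:
Code:
NAME           PROPERTY        VALUE        SOURCE
edata/proxmox  encryption      aes-256-gcm  -
edata/proxmox  encryptionroot  edata        -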
 
I have successfully created a dataset and a sub-dataset, i.e.:
- zpool/data/
- zpool/data/proxmox
But I have a question about whether the following works OK for ZFS:
1) I added zpool/data/proxmox as a ZFS storage type in Proxmox.
2) But I also added its mount path /zpool/data/proxmox as a directory storage type in Proxmox. Why? As a location for templates and ISOs.
I notice that the disk of the virtual machine I created, which is stored on zpool/data/proxmox, cannot be seen when I view the directory /zpool/data/proxmox (which is also its mount point) via the shell. Is that normal behavior? The machine otherwise works fine and the Proxmox interface shows the disk is stored on zpool/data/proxmox.
Yes, this is because VM images are created as virtual block devices and not files. They are sub-datasets of zpool/data/proxmox. You can see them with zfs list -r zpool/data/proxmox and manage them through zfs just like other datasets, if you ever need fine-tuning.
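For example, a VM disk then shows up as a zvol without a mountpoint; the names and sizes below are made up for illustration:
Code:
# zfs list -r zpool/data/proxmox
NAME                               USED  AVAIL  REFER  MOUNTPOINT
zpool/data/proxmox                32.1G   850G    96K  /zpool/data/proxmox
zpool/data/proxmox/vm-100-disk-0  32.0G   850G  11.3G  -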

Further, I have a question about limiting the amount of RAM ZFS uses on the host to 8 GB (I have 32 GB installed). In the manual I read that I can do the following:
"/etc/modprobe.d/zfs.conf and insert:
options zfs zfs_arc_max=8589934592"
Can this be done without having an SSD assigned for 'caching', or does it require an SSD cache? I wondered about a dependency because the term ARC is used both in the section about SSD caching and in the section about limiting RAM usage.
I'm pretty sure this only affects RAM usage. If you have configured a cache device for your pool, that's an L2ARC (level 2 ARC); see Wikipedia for a good overview of ZFS caching.
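For reference, the value from the manual is simply 8 GiB expressed in bytes (8 * 1024^3 = 8589934592). A minimal sketch of the file:
Code:
# /etc/modprobe.d/zfs.conf
# limit the ZFS ARC to 8 GiB (8 * 1024 * 1024 * 1024 bytes)
options zfs zfs_arc_max=8589934592
If I remember correctly, the same value can also be applied at runtime by writing it to /sys/module/zfs/parameters/zfs_arc_max, which avoids waiting for the next reboot.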
 
Many thanks Fabian, I have learned a lot about ZFS recently. Pretty cool stuff. The next step is the NFS and Samba sharing built into ZFS, instead of a separate NFS/Samba setup. I assume the ZFS-integrated file sharing options can be applied to datasets as well as folders? I can only find Solaris-related documentation on ZFS sharing; is that good enough (https://docs.oracle.com/cd/E23824_01/html/821-1448/gayne.html)?

UPDATE: I gave it a try by doing the following:
1) apt-get install nfs-common nfs-kernel-server (on the Proxmox host)
2) zfs set sharenfs=on zpool/data
3) On the server, edited /etc/exports by adding
/zpool/data *(rw)
(I just read that this isn't necessary.)
4) On the client, edited /etc/fstab by adding
192.168.178.5:/zpool/data /mnt/zfs-data nfs auto 0 0

But on the client I can't see the files created on the host, and on the host I can't see the files created by the client...
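As an aside, the server-side export can be verified independently of the client; a sketch, assuming the commands are run on the Proxmox host that owns the pool:
Code:
zfs get sharenfs zpool/data   # should report 'on' (or the configured share options)
exportfs -v                   # the ZFS-managed export should show up in this list
showmount -e localhost        # what the NFS server actually offers to clients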
 

Is the filesystem mounted on both sides? Please check with findmnt /zpool/data and findmnt /mnt/zfs-data respectively. Adding an entry to /etc/fstab does not automatically mount the file system (that is, until the next reboot).
 
findmnt /zpool/data gives:
/zpool/data zpool/data zfs rw,xattr,noacl

findmnt /mnt/zfs-data gives:
nothing

I always reboot the system after adjusting /etc/fstab, so that should be fine.
 

Seems like on the client the file system wasn't mounted. You can also use mount -a to mount the entries in /etc/fstab (avoids the need to reboot).
 
mount -a gives:
mount.nfs: Operation not permitted

The client is an Ubuntu 18.04 server container. I know I can work with a local bind mount for the container in this case, but in the end NFS should also work for VMs. Or could the issue be specifically related to the use of a container? Or does it need to be a privileged container?
 
Meanwhile I figured out that a container should run in privileged mode to enable the NFS and CIFS options for it. But it seems you have to mark a container as 'privileged' during the initial build; at least I am not able to change it afterwards via the web interface.

Although in the end I want to get NFS and probably SMB running, for this particular use case I moved on with a local bind mount.

So on the host I added the following line to /etc/pve/lxc/101.conf for the container:
mp0: /zpool/data/Nextcloud,mp=/mnt/nextcloud
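For completeness, the same bind mount can also be added with the pct tool instead of editing the file by hand, and NFS/CIFS mounting inside a container can reportedly be allowed through the features option (the syntax of the second command is my assumption, please double-check it):
Code:
pct set 101 -mp0 /zpool/data/Nextcloud,mp=/mnt/nextcloud
pct set 101 -features "mount=nfs;cifs"   # assumed syntax, for a privileged container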
 
Now I am completely lost. I had my ZFS pool nicely working and configured as I wanted, and now there is suddenly a 'mount is not empty' error. My setup is as follows:
- I have a RAID-1 pool of 2 devices, which is called zpool.
- I have one dataset called 'data' with a sub-dataset in it called 'proxmox'.
- I have one encrypted dataset called 'edata' with an encrypted sub-dataset in it called 'eproxmox'.
- Proxmox uses the following storage locations on the ZFS pool:
[screenshot: PVE storage configuration]
/zpool/data/proxmox and /zpool/edata/eproxmox are added as 'directory' type as well as 'ZFS' type, in order to store disk images and containers (ZFS type) and templates (directory type).
- They normally should mount to /zpool/data, /zpool/edata, /zpool/data/proxmox, and /zpool/edata/eproxmox.
- Regardless of whether it is the non-encrypted or the encrypted dataset, when mounting it says 'cannot mount /zpool/data: directory is not empty'.

Any help is greatly appreciated.
 
UPDATE:
- I tried 'zpool export zpool'.
- Removed the directory /zpool with 'rm -r /zpool'.
- Rebooted; same problem. The directory is created again, but the datasets remain unmounted.
- Can the issue be my template directories as shown above? That somehow PVE creates these directories on boot faster than it mounts the ZFS datasets? That would be strange, by the way, because the system was rebooted earlier and did not have this problem.

UPDATE 2:
"- Can the issue be my template directories as shown above? That somehow PVE creates these directories on boot faster than it mounts the ZFS datasets? That would be strange, by the way, because the system was rebooted earlier and did not have this problem."

It seems that this is indeed the issue. After disabling the PVE directory storages that live on the ZFS datasets (see below), the mount went fine. Strange that the problem did not arise earlier. Is there a solution to this? Disabling these directories before every reboot is not a structural solution. Is it maybe the fact that I use zpool/data/proxmox as a ZFS-type storage location for disk images and containers and /zpool/data/proxmox as a directory for templates? I am documenting this post a bit, so I hope it helps others as well.

[screenshot: PVE storage overview with the directory storages disabled]
Yes, mixing storages can lead to such edge cases. But in your case, there is a way to fix it: You can make PVE skip directory creation on storage activation by setting the mkdir option to 0. It's not in the GUI, so you'll need to use pvesm set <STORAGE> --mkdir 0.

UPDATE 3:
This seems to only resolve the problem for the non-encrypted dataset, not for the encrypted dataset (still directory is not empty).

That's strange. Which directories/files are present below the mount point for the encrypted dataset after it failed to mount?
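For anyone following along, a quick way to check this (using the paths from earlier in the thread):
Code:
ls -la /zpool/edata/eproxmox      # shows what already exists below the mount point
zfs mount zpool/edata/eproxmox    # retry the mount (key loaded) to reproduce the error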
 
/zpool/edata/eproxmox/ contains the directories 'dump' and 'template'.

What would be the alternative to mixing storages? I thought about the following, but then you still have the same potential issue:
- Current situation:
Dataset: zpool/edata/
Dataset: zpool/edata/eproxmox (added to PVE as ZFS type), which mounts to /zpool/edata/eproxmox
Folder: /zpool/edata/eproxmox (added to PVE as directory type) for storing templates*

- Alternative situation:
* Replacing 'Folder: /zpool/edata/eproxmox (added to PVE as directory type) for storing templates' with a folder /zpool/edata/eproxmox-templates, but this can still result in the same conflict, i.e. /zpool/edata/eproxmox-templates being created earlier than the zpool/edata/ dataset is mounted. In my understanding, basically every situation where you put the template directory on a dataset can give this problem. Or am I overlooking something?

Regarding your solution 'pvesm set <STORAGE> --mkdir 0': what does <STORAGE> refer to? The PVE storage IDs of the directories for templates? So in the case of multiple directories the command has to be repeated? And why does PVE try to create the directory again at boot at all, when adding it to PVE should be sufficient, I would think? Lastly, which configuration file does the 'pvesm set ...' command modify?
 
The alternative would be to have the directory storage live on its own partition (then there is no "mixing"). I understand if you don't want that and it should work with the mkdir option.

<STORAGE> has to be the storage ID/name for the directory storages, i.e. zfs-eproxmox-templates and zfs-proxmox-templates. The configuration file is /etc/pve/storage.cfg, but please be careful with manual modifications.
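Spelled out for the storage IDs mentioned above, the fix would look something like this (a sketch; the exact content line in the config is an assumption on my part):
Code:
pvesm set zfs-proxmox-templates --mkdir 0
pvesm set zfs-eproxmox-templates --mkdir 0
Afterwards the corresponding entry in /etc/pve/storage.cfg should contain a line like:
Code:
dir: zfs-eproxmox-templates
        path /zpool/edata/eproxmox
        content vztmpl,iso
        mkdir 0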
 
Hello!

I didn't want to create a new topic; I hope this is a good place for my question.

I'm trying to set up an encrypted ZFS dataset. I'm following the wiki and have some questions:
1. I've created a ZFS RAID10 pool: link1
2. I've checked the encryption feature of the pool, and created the encrypted_data dataset: link2
3. When I tried to add it to Proxmox with pvesm, I got an error: link3
Yes, this was a cosmetic regression introduced in the initial PVE 6.3 release, where "cosmetic" means the storage was still created correctly. A fix has been available in all package repositories for quite some time; please upgrade your packages.
4. I tried to add it again, but this time I used /VMs/encrypted_data instead of VMs/encrypted_data as I had used in the previous step, and got an error that the dataset is already defined: link4
5. I removed the dataset and tried to add it to Proxmox for the third time: link5
6. Then I tried to add it again as VMs/encrypted_data and got the error: 400 Result verification failed; config: type check ('object') failed. So now /VMs/encrypted_data exists in Proxmox; I tried to mount it (link6) and got the error: cannot mount 'VMs/encrypted_data': filesystem already mounted
7. After a reboot and a zfs mount, it seems to be working; at least after the zfs mount it asks for the passphrase...
Sounds good. You can use zfs get mounted,mountpoint <dataset> to see if and where the dataset is mounted. What does cat /etc/pve/storage.cfg show? If there are duplicate entries left over from the "failed" attempts, it's best to remove those.
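For example, checking the dataset from this thread might return something like the following (the values are illustrative):
Code:
# zfs get mounted,mountpoint VMs/encrypted_data
NAME                PROPERTY    VALUE                SOURCE
VMs/encrypted_data  mounted     yes                  -
VMs/encrypted_data  mountpoint  /VMs/encrypted_data  default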
So my question: did I do anything wrong to get so many errors while setting this up? This is just a virtual Proxmox to play around with things like this, but my future plan is to set up my Proxmox host with ZFS RAID10 and encryption.

Thanks for any help you can provide!
 
