See man 8 zfs-mount-generator and follow the instructions there (especially the example).
Thank you very much, this works and does exactly what I wanted.
I have created two datasets, one 'edata' (for encrypted data) and one 'edata-proxmox' (intended for VM images, containers, and disks) because of the notes given above regarding zvol block devices, but from a dataset-creation point of view on the host there is nothing different between the two datasets. What speaks against putting the VM images, containers and disks in a folder within the 'edata' dataset, for example edata/proxmox, instead of in a dedicated dataset like 'edata-proxmox'?
Hi,
there is nothing wrong with having a ZFS storage in PVE be a sub-dataset like edata/proxmox. Just specify that path as the pool.
If you only create a directory based storage then you won't be able to use the integrated ZFS features. PVE expects to work with a dataset, because ZFS management happens at the dataset-level as opposed to the folder-level (e.g. all the dataset properties, snapshots, cloning, etc.).
Note that
Code:
mkdir /myzpool/newdir
zfs create myzpool/newdataset
are very different. You won't be able to do ZFS operations on /myzpool/newdir, because it is only a folder.
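To make the difference concrete, a small illustration (the snapshot name and the compression property below are example values, not from the thread):
Code:
zfs snapshot myzpool/newdataset@example      # works: newdataset is a real dataset
zfs set compression=lz4 myzpool/newdataset   # dataset properties can be set as well
zfs snapshot myzpool/newdir@example          # fails: 'myzpool/newdir' is only a folder, not a dataset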
Great, understood! Does it also mean that I don't have to encrypt the sub-dataset separately? Is it encrypted by its parent dataset, i.e. in my case edata?
Yes. If you created it as a child of the encrypted dataset, i.e. with zfs create edata/proxmox, it inherits the encryption. You can check with zfs get encryption,encryptionroot edata/proxmox. The encryptionroot is the dataset whose key needs to be loaded for edata/proxmox to be accessible and should be edata in your case.
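For illustration, the output could look roughly like this (a hypothetical sketch; the exact cipher depends on how edata was created):
Code:
# zfs get encryption,encryptionroot edata/proxmox
NAME           PROPERTY        VALUE        SOURCE
edata/proxmox  encryption      aes-256-gcm  -
edata/proxmox  encryptionroot  edata        -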
I have successfully created a dataset and a sub-dataset, i.e.:
- zpool/data/
- zpool/data/proxmox
But I have a question whether the following I did works OK for ZFS:
1) I added zpool/data/proxmox as ZFS storage type in proxmox
2) But I also added its mount path /zpool/data/proxmox as directory storage type in Proxmox. Why? As a location for templates and ISOs.
I notice that the disk of the virtual machine I created, which is stored on zpool/data/proxmox, cannot be seen when I view the directory /zpool/data/proxmox (its mount point) via the shell. Is that normal behavior? The machine itself works fine and the Proxmox interface shows the disk is stored on zpool/data/proxmox.
Yes, this is because VM images are created as virtual block devices and not as files. They are sub-datasets of zpool/data/proxmox. You can see them with zfs list -r zpool/data/proxmox and manage them through zfs just like other datasets, if you ever need fine-tuning.
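For illustration, the listing could look something like this (hypothetical names and sizes; vm-100-disk-0 is just an example of how PVE names VM disk volumes):
Code:
# zfs list -r zpool/data/proxmox
NAME                               USED  AVAIL  REFER  MOUNTPOINT
zpool/data/proxmox                32.1G   400G    96K  /zpool/data/proxmox
zpool/data/proxmox/vm-100-disk-0  32.1G   400G  12.4G  -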
Further, I have a question about limiting the RAM usage of the host and ZFS to 8 GB (I have 32 GB installed). In the manual I read that I can do the following:
"/etc/modprobe.d/zfs.conf and insert:
options zfs zfs_arc_max=8589934592"
Can this already be done without an SSD assigned for 'caching', or does it require an SSD cache? I thought there might be a dependency because the term ARC is used both in the section about SSD caching and in the section about limiting RAM size.
I'm pretty sure this only affects RAM usage. If you have configured a cache device for your pool, that's the L2ARC (level 2 ARC); see Wikipedia for a good overview of ZFS caching.
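For completeness, a minimal sketch of applying the ARC limit from the quote above (the update-initramfs step is an assumption based on common Debian/PVE practice, so the value is also picked up when ZFS is loaded from the initramfs, e.g. with root on ZFS):
Code:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592   # 8 GiB = 8 * 1024^3 bytes

# refresh the initramfs so the setting applies at early boot, then reboot
update-initramfs -u -k all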
Many thanks Fabian, I learned a lot about ZFS recently. Pretty cool stuff. The next step is the NFS and SAMBA sharing built into ZFS, instead of a separate NFS/SAMBA setup. I assume the ZFS-integrated file sharing options can be applied to datasets as well as folders? I only find Solaris-related documentation on ZFS, is that good enough (https://docs.oracle.com/cd/E23824_01/html/821-1448/gayne.html)?
UPDATE: I gave it a try by:
1) apt-get install nfs-common nfs-kernel-server (on the Proxmox host)
2) zfs set sharenfs=on zpool/data
3) On the server edited /etc/exports by adding /zpool/data *(rw) (just read that this isn't necessary).
4) On the client edited /etc/fstab by adding
192.168.178.5:/zpool/data /mnt/zfs-data nfs auto 0 0
But on the client I can't see files created on the host, and on the host I can't see files created by the client...
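As an aside, a minimal sketch of the ZFS built-in sharing approach (the subnet restriction is an assumption, adjust it to your network; plain sharenfs=on exports with default options instead):
Code:
apt-get install nfs-kernel-server                 # ZFS hands its NFS shares to the kernel NFS server
zfs set sharenfs=rw=@192.168.178.0/24 zpool/data  # or simply: zfs set sharenfs=on zpool/data
zfs get sharenfs zpool/data                       # verify the share options
showmount -e localhost                            # list what the server currently exports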
Is the filesystem mounted on both sides? Please check with findmnt /zpool/data and findmnt /mnt/zfs-data respectively. Adding an entry to /etc/fstab does not automatically mount the file system (that is, not until the next reboot).
findmnt /zpool/data gives:
/zpool/data zpool/data zfs rw,xattr,noacl
findmnt /mnt/zfs-data gives:
nothing
I always reboot the system after adjusting /etc/fstab, so that should be fine.
Seems like on the client the file system wasn't mounted. You can also use mount -a to mount the entries in /etc/fstab (avoids the need to reboot).
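For a quick client-side check, something along these lines should work (assuming the /etc/fstab entry shown earlier):
Code:
findmnt /mnt/zfs-data || mount /mnt/zfs-data   # mount just this fstab entry if it isn't mounted yet
mount -a                                       # or mount everything from /etc/fstab that isn't mounted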
Now I am completely lost. I had my ZFS pool nicely working and configured as I wanted, and now there is suddenly an error 'directory is not empty'. My setup is as follows:
- I have a raid-1 pool of 2 devices, which is called zpool.
- I have one dataset called 'data' with a sub dataset in it called 'proxmox'
- I have one encrypted dataset called 'edata' with an encrypted sub dataset in it called 'eproxmox'
- Proxmox uses the following storage locations on the ZFS:
View attachment 17036
/zpool/data/proxmox and /zpool/edata/eproxmox are added as 'directory type' as well as 'zfs type' in order to store disk images and containers (ZFS type) and templates (directory type).
- They normally should mount to /zpool/data, /zpool/edata, /zpool/data/proxmox, and /zpool/edata/eproxmox
- Regardless of whether it is the non-encrypted or encrypted dataset, when mounting it says: cannot mount '/zpool/data': directory is not empty.
Any help is greatly appreciated.
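As a diagnostic aside (a hedged sketch, not from the thread): to see what is blocking such a mount, you can check whether the dataset is mounted and what already exists inside the mountpoint directory:
Code:
zfs get mounted,mountpoint zpool/data   # confirm the dataset is currently not mounted
ls -la /zpool/data                      # whatever shows up here was created inside the empty mountpoint directory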
UPDATE:
- I tried 'zpool export zpool'
- Removed the directory /zpool with 'rm -r /zpool'
- Rebooted, result same problem. Directory created again, but datasets remain unmounted.
- Can the issue be my template directories as shown above? That somehow PVE creates these directories on boot quicker than it mounts the ZFS datasets? That would be strange, by the way, because the system was rebooted earlier and did not have this problem.
UPDATE 2:
"- Can the issue be my template directories as shown above? That somehow PVE creates these directory quicker on boot than that it mounts the zfs datasets? Would be strange by the way because the system was earlier rebooted and did not had this problem?"
It seems that this is indeed the issue. After disabling the ZFS directories in PVE which are directories on the ZFS dataset (see below), the mount went fine. Strange that the problem did not arise earlier. Is there a solution to this, because disabling these directories before rebooting is not a structural solution? Is it maybe the fact that I use zpool/data/proxmox as ZFS-type storage location for disk images and containers and /zpool/data/proxmox as a directory for templates? I am documenting the post a bit so I hope it helps others as well.
View attachment 17038
Yes, mixing storages can lead to such edge cases. But in your case, there is a way to fix it: You can make PVE skip directory creation on storage activation by setting the mkdir option to 0. It's not in the GUI, so you'll need to use pvesm set <STORAGE> --mkdir 0.
UPDATE 3:
This seems to only resolve the problem for the non-encrypted dataset, not for the encrypted dataset (still directory is not empty).
That's strange. Which directories/files are present below the mount point for the encrypted dataset after it failed to mount?
/zpool/edata/eproxmox/ contains the directories 'dump' and 'template'.
What would be the alternative to mixing storages? I thought about the following, but then you still have the same potential issue:
- Current situation:
Dataset: zpool/edata/
Dataset: zpool/edata/eproxmox (added to PVE as ZFS type) and mounts to /zpool/edata/eproxmox
Folder: /zpool/edata/eproxmox (added to PVE as directory type) for storing templates*
- Alternative situation:
* Replacing 'Folder: /zpool/edata/eproxmox (added to PVE as directory type) for storing templates' with a folder /zpool/edata/eproxmox-templates, but this can still result in the same conflict, i.e. /zpool/edata/eproxmox-templates being created earlier than the zpool/edata/ dataset is mounted. In my understanding, basically every situation where you put the template directory on a dataset can give this problem. Or do I overlook something?
Regarding your solution 'pvesm set <STORAGE> --mkdir 0': what does <STORAGE> refer to? To the PVE storage IDs of the directories for templates? So in case of multiple directories the command has to be repeated? Why does PVE try to create the directory again at boot at all, when adding it to PVE should be sufficient, I would think? And lastly, which configuration file does the 'pvesm set ...' command modify?
The alternative would be to have the directory storage live on its own partition (then there is no "mixing"). I understand if you don't want that, and it should work with the mkdir option. <STORAGE> has to be the storage ID/name for the directory storages, i.e. zfs-eproxmox-templates and zfs-proxmox-templates. The configuration file is /etc/pve/storage.cfg, but please be careful with manual modifications.
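For illustration, applying this to both directory storages, and a sketch of what the resulting entry could look like (the path and content types below are assumptions from the discussion, not a copy of the real config):
Code:
pvesm set zfs-proxmox-templates --mkdir 0
pvesm set zfs-eproxmox-templates --mkdir 0

# sketch of the resulting section in /etc/pve/storage.cfg:
dir: zfs-eproxmox-templates
        path /zpool/edata/eproxmox
        content iso,vztmpl
        mkdir 0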
Hello!
I didn't want to create a new topic, hope this is a good place for my question.
I'm trying to set up an encrypted ZFS dataset. I'm following the wiki, and have some questions:
1. I've created a zfs raid10 pool: link1
2. I've checked the encryption feature of the pool, and created the encrypted_data dataset: link2
3. When I tried to add it to proxmox with pvesm, I got an error: link3
4. I've tried to add it again, but this time I used /VMs/encrypted_data instead of VMs/encrypted_data as I had used in the previous step, and got an error that the dataset is already defined: link4
5. I removed the dataset, and tried to add it to proxmox for the third time: link5
6. Then I tried to add it again as VMs/encrypted_data, with the error: 400 Result verification failed; config: type check ('object') failed. So now /VMs/encrypted_data exists on proxmox; I tried to mount it (link6) with the error: cannot mount 'VMs/encrypted_data': filesystem already mounted
7. After a reboot and a zfs mount, it seems that it is working, at least after the zfs mount it asks for the passphrase....
So my question: did I do anything wrong that I got so many errors while setting this up? This is just a virtual proxmox to play around with things like this, but my future plan is to set up my proxmox host with ZFS raid10 with encryption.
Thanks for any help you can provide!
Yes, this was a cosmetic regression introduced in the initial PVE 6.3 release, where "cosmetic" means the storage was still created correctly. A fix has been available in all package repositories for quite some time, please upgrade your packages.
Sounds good. You can use zfs get mounted,mountpoint <dataset> to see if and where the dataset is mounted. What does cat /etc/pve/storage.cfg show? If there are duplicate entries left over from the "failed" attempts, it's best to remove those.
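A quick sketch of those checks for this particular setup (dataset name taken from the post; the grep is just a convenience to narrow the output):
Code:
zfs get mounted,mountpoint VMs/encrypted_data   # should show mounted=yes and the mountpoint
grep -A 3 encrypted_data /etc/pve/storage.cfg   # look for leftover duplicate entries to remove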