How to properly configure encrypted ZFS storage only?

ams_tschoening

New Member
Jun 28, 2021
I'm setting up Proxmox 6.4 with most storage being ZFS RAID1, simply using the options in the graphical installer. A few GiB of the available space are left for swap using MDRAID, because it's still advised not to put swap on ZFS. The goal is to host mainly VMs, and most of those need to be stored encrypted, so I created an encrypted dataset and added it as storage using "pvesm". The workflow will be that whenever the server needs to reboot, the dataset is mounted manually by providing a password. While setting things up, I noticed several things I would like to ask about:
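For reference, the rough commands behind that setup look like this; the dataset name "rpool/encrypted" and the storage ID "encrypted-zfs" are just the placeholders I use here:

Code:
# create the encrypted dataset, the passphrase is asked for interactively
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt rpool/encrypted

# register it as storage for VM images and container root disks
pvesm add zfspool encrypted-zfs --pool rpool/encrypted --content images,rootdir

# after a reboot, unlock and mount the dataset manually
zfs load-key rpool/encrypted
zfs mount rpool/encrypted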

1. Is there always a "local" storage added when not available in the config?

My goal was to store everything in the encrypted dataset by default, so I tried to remove the storage named "local" of type "dir" that the installer created. However, after adding and removing different storages, that "local" was recreated, and this time allowing all kinds of content. That was not the case with the "local" the installer created, because the installer assigned "images" and "rootdir" to the ZFS storage only. It also doesn't seem to depend on whether another storage of type "dir" exists, because I created one and "local" was recreated anyway.

Do things depend on the name "local", which I never used in my tests? Or does Proxmox check paths and expect something like "/var/lib/vz"? Because that path is part of ZFS in the end anyway.
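To illustrate what I tried, roughly (purely as an example, not a recommendation):

Code:
# show the current storage configuration
cat /etc/pve/storage.cfg

# remove the default "local" storage
pvesm remove local

# add some unrelated storage...
pvesm add dir testdir --path /rpool/encrypted/testdir --content iso

# ...and in my tests "local" shows up again with all content types
pvesm status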

2. Should I simply change the path of the default "local" storage to point somewhere else?

I'm wondering because "pvesm set ..." doesn't seem to support an option named "--path", while "pvesm add ..." does, and "--pool" is supported for "set" as well. It looks like changing a path is not supported, while changing a pool is?
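For illustration, the difference I mean (the storage name is just an example):

Code:
# "--path" is accepted when adding a dir storage...
pvesm add dir testdir --path /rpool/encrypted/testdir --content images

# ...but "pvesm set" doesn't offer "--path", so this fails for me
pvesm set testdir --path /rpool/encrypted/elsewhere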

3. Why are some contents not allowed for storage of type "zfspool"?

When trying to create a new ZFS storage, I provided all the values listed in the docs for "--content", only to see that many of them are not allowed, even though ZFS is documented to be file-level storage pretty much like "dir".

Code:
storage does not support content type 'vztmpl'
storage does not support content type 'backup'
storage does not support content type 'iso'
storage does not support content type 'snippets'

Is there any reason for those restrictions? I'm wondering because what's missing for "zfspool" is handled by "local" of type "dir" by default. Yet I'm easily able to create a storage of type "dir" with all the available content types and a path within the ZFS pool. So why is there such a distinction?
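For comparison, this is the kind of "dir" storage that was accepted with all content types, even though its path lives inside the very same pool (names are only examples):

Code:
# a plain dataset below the encrypted one, used as a directory
zfs create rpool/encrypted/dirstorage

# a dir storage on that path accepts every content type
pvesm add dir encrypted-dir --path /rpool/encrypted/dirstorage --content images,rootdir,iso,vztmpl,backup,snippets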

4. Do I need to create the storage as type "dir" instead of "zfspool"?

I would like to store all data preferably in the encrypted ZFS dataset and at the same time need to mount it manually. However, "zfspool" doesn't support some content types at all and doesn't seem to support "--is_mountpoint" with a value of "yes". At least I get an error message about an "unexpected property" of that name, which is NOT the case when using storage type "dir" instead of "zfspool".

Does this mean a storage of type "zfspool" is expected to always be available and can't be mounted manually on top? In other posts, "is_mountpoint" is explicitly mentioned for my use case of encrypted ZFS, but again only with type "dir" providing the underlying storage. I don't really get why it needs to be that way.
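A sketch of the two variants I compared, assuming the storages from above already exist:

Code:
# accepted for type "dir": the storage is only activated once the path is actually mounted
pvesm set encrypted-dir --is_mountpoint yes

# rejected for type "zfspool" in my tests (the "unexpected property" error)
pvesm set encrypted-zfs --is_mountpoint yes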

5. What are the snapshots-related differences of "dir" and "zfspool"?

According to the docs of the storage types, ZFS supports snapshots, while "dir" doesn't, with the exception of VMs using the "qcow2" image format. This reads to me like the snapshot feature is implemented differently, e.g. at the level of the file system in case of ZFS, or by QEMU in case of "dir". In the latter case, working with additional files etc., pretty much like VMware, VirtualBox etc. do for their VM-specific snapshots on file systems without snapshot support like ext4, NTFS etc.

Thanks!
 
1. Is there always a "local" storage added when not available in the config?

According to my tests, it seems exactly that way. Repeated execution of "pvesm remove local" succeeds as well, while any other name can only be removed once and afterwards an error is printed. The "local" configuration is not always visible in the config file, but in my tests it is added back at least whenever another storage is added, for some reason. During runtime it always seems to be available.

2. Should I simply change the path of the default "local" storage to point somewhere else?

I wasn't able to do so using the shell, because "--path" is missing for "pvesm set ...". Additionally, the default "local" storage created by the installer doesn't contain all content types, because "images" and "rootdir" were added to the created ZFS-based storage. So changing the path of "local" alone wouldn't result in one storage holding all available data.

Instead, I decided to use "pvesm set local --disable yes" to disable the default storage and create one replacement for my needs. There's a risk in doing so, though, because "local" seems to be a hard-coded fallback for some functions, like hibernation. At least in this case my tests ran fine with the default setting of "Automatic" for state storage, because the state could simply be stored alongside the already available images.
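In commands, that boils down to the following, with the replacement being the "dir" storage on the encrypted dataset already described above:

Code:
# keep "local" in the config, but stop using it
pvesm set local --disable yes

# the replacement: one dir storage inside the encrypted dataset for all content types
pvesm add dir encrypted-dir --path /rpool/encrypted/dirstorage --content images,rootdir,iso,vztmpl,backup,snippets --is_mountpoint yes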

3. Why are some contents not allowed for storage of type "zfspool"?

???

4. Do I need to create the storage as type "dir" instead of "zfspool"?

Yes, it looks that way to me. Otherwise I was simply not able to store all available content types within the one storage I would like to have. Of course, in the end it doesn't make too much of a difference from a storage point of view, because everything is stored in ZFS. But it does make a difference in the web UI, and especially regarding whether one needs to make extra sure that things are encrypted or not.

5. What are the snapshots-related differences of "dir" and "zfspool"?

In case of storage type "dir", snapshots are only possible with the QCOW2 image format, while Proxmox allows creating VMs with VMDK and RAW as well. In the latter cases snapshots are not supported, and Proxmox points that out with a hint in the corresponding web UI about the current guest configuration. Consequently, after removing a VMDK disk from a test VM in favor of a QCOW2 image, creating snapshots was instantly available again. Additionally, keep in mind that all snapshots seem to be stored within the QCOW2 image file created for a given virtual disk. Containers, OTOH, seem to always be created using the RAW image format and therefore don't support snapshots without properly configured ZFS storage.
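That the snapshots really live inside the QCOW2 file itself can be checked with qemu-img; the path is just an example from my layout:

Code:
# list the internal snapshots stored in a qcow2 image
qemu-img snapshot -l /rpool/encrypted/dirstorage/images/100/vm-100-disk-0.qcow2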

In case of storage type "zfspool", virtual disk images are always created using the RAW format and a ZVOL, and the created snapshots are really ZFS-level. One can check that using "zfs list -t snapshot". I thought I had been able to create the same image formats as for storage type "dir", but I had simply chosen the wrong storage in my tests. How containers are stored fits the "rootdir" content type as well now: one dataset per container.
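For example, after taking snapshots of a VM and a container on the "zfspool" storage, it can be verified roughly like this (dataset and snapshot names are only examples from my setup):

Code:
# snapshots created via the web UI show up as real ZFS snapshots of the backing zvol,
# e.g. rpool/encrypted/vm-100-disk-0@before_upgrade
zfs list -t snapshot

# containers get their own dataset (one per container) instead of a zvol
zfs list -r rpool/encrypted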

So in the end, one needs to choose whether ZFS-level or app-level snapshots are preferred.
 
3. Why are some contents not allowed for storage of type "zfspool"?

When trying to create a new ZFS storage, I provided all the values listed in the docs for "--content", only to see that many of them are not allowed, even though ZFS is documented to be file-level storage pretty much like "dir".

Code:
storage does not support content type 'vztmpl'
storage does not support content type 'backup'
storage does not support content type 'iso'
storage does not support content type 'snippets'

Is there any reason for those restrictions? I'm wondering because what's missing for "zfspool" is handled by "local" of type "dir" by default. Yet I'm easily able to create a storage of type "dir" with all the available content types and a path within the ZFS pool. So why is there such a distinction?

4. Do I need to create the storage as type "dir" instead of "zfspool"?

I would like to store all data preferably in the encrypted ZFS dataset and at the same time need to mount it manually. However, "zfspool" doesn't support some content types at all and doesn't seem to support "--is_mountpoint" with a value of "yes". At least I get an error message about an "unexpected property" of that name, which is NOT the case when using storage type "dir" instead of "zfspool".

Does this mean a storage of type "zfspool" is expected to always be available and can't be mounted manually on top? In other posts, "is_mountpoint" is explicitly mentioned for my use case of encrypted ZFS, but again only with type "dir" providing the underlying storage. I don't really get why it needs to be that way.
ZFS only knows two things: zvols (that's a block device) and datasets (file-level storage). LXCs use datasets, and VMs use zvols as virtual disks.
Like you already said, you can use a dir storage pointing to the mountpoint of a dataset if you want to store other stuff on that pool. Every dataset is its own filesystem, so it's a good idea to create a new dataset for every dir storage you add. That way it is easier to optimize ZFS options for the workload that is using the storage.
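For example, something like this (names are only placeholders):

Code:
# one dataset per dir storage, so options like compression can be tuned per workload
zfs create rpool/encrypted/backups
zfs set compression=lz4 rpool/encrypted/backups

pvesm add dir encrypted-backups --path /rpool/encrypted/backups --content backup --is_mountpoint yes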
5. What are the snapshots-related differences of "dir" and "zfspool"?

According to the docs of the storage types, ZFS supports snapshots, while "dir" doesn't, with the exception of VMs using the "qcow2" image format. This reads to me like the snapshot feature is implemented differently, e.g. at the level of the file system in case of ZFS, or by QEMU in case of "dir". In the latter case, working with additional files etc., pretty much like VMware, VirtualBox etc. do for their VM-specific snapshots on file systems without snapshot support like ext4, NTFS etc.
With ZFS snapshots, rolling back is a one-way road. Do it, and everything newer than the snapshot you rolled back to is deleted forever. Because of that, PVE only allows you to roll back to the most recent snapshot, so people who don't know better don't lose too much data. If you have 100 snapshots you can roll back to each one, but you would need to delete the newer ones first. A little bit annoying to delete 99 snapshots just to be able to roll back to the oldest one.
There is more complex stuff like clones and so on, but PVE isn't using it. You normally don't roll back with ZFS if you just want to access some old files or want to test something. You would create a clone based on the data of a snapshot and work with that clone instead.
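Roughly like this (dataset and snapshot names are just examples):

Code:
# a plain rollback only works to the most recent snapshot,
# with -r all newer snapshots get destroyed as well
zfs rollback rpool/encrypted/vm-100-disk-0@yesterday

# to just look at the old data, clone the snapshot and work with the clone instead
zfs clone rpool/encrypted/vm-100-disk-0@yesterday rpool/encrypted/restore-test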
If I understand it right, qcow2 allows you to switch between snapshots. And then there are LVM snapshots too.
They are all different, and you would need to read the documentation of the respective filesystems to see how to use them, what they are good for and what not.
If you want ZFS snapshots you can't store your VMs on a dir storage, where the "qcow2" format would be used for your virtual disks. For ZFS snapshots you really need to add the VM image storage as "ZFS", so the virtual disks are created in "raw" format.

If you define a storage as state storage, you can also dump the RAM and save it with the ZFS snapshot. That way you can roll back into a running VM. I would recommend using the Proxmox Backup Server for daily/weekly backups. That way you can do a lot of backups, because it is incremental and uses deduplication, so it's quite fast and doesn't consume a lot of space. Snapshots won't help you if the complete pool degrades, but a PBS on another disk/host will. And you don't need to replace the old VM when restoring a backup: give it another ID and you get a clone from the past to work with. But snapshots are still a nice addition if you want a fast save of your VMs that needs to run hourly or every several minutes.
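For the RAM part, the snapshot just needs to include the vmstate, for example (VM ID and snapshot name are only examples):

Code:
# snapshot VM 100 including its RAM, so a rollback resumes the running VM
qm snapshot 100 before_upgrade --vmstate 1

# later roll back to exactly that running state
qm rollback 100 before_upgrade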
 
ZFS only knows two things: zvols (that's a block device) and datasets (file-level storage). LXCs use datasets, and VMs use zvols as virtual disks.
That doesn't explain why Proxmox decided not to allow storing the other content types in additional datasets. They already create additional subdirectories in case of storage type "dir"; I don't see a reason why the same approach isn't implemented with datasets for storage of type "zfspool" as well.
If you want ZFS snapshots you can't store your VMs on a dir storage, where the "qcow2" format would be used for your virtual disks. For ZFS snapshots you really need to add the VM image storage as "ZFS", so the virtual disks are created in "raw" format.
I don't necessarily need ZFS snapshots for my VMs, just any snapshots, and QCOW2 might be good enough as well. Of course both have their pros and cons: I don't like using ZVOLs too much and prefer datasets and files instead, while QCOW2 with all snapshots in one file doesn't sound too good for e.g. RSYNC either. OTOH, those snapshots are most likely only for short usage between updates of VMs etc. anyway.

Though, when creating containers on a storage of type "dir", it seems the RAW image format is always used, which doesn't support snapshots at all. I need to think about that, but I'm not using containers right now anyway.
If you define a storage as state storage, you can also dump the RAM and save it with the ZFS snapshot. That way you can roll back into a running VM.
This seems to work with storage type "dir" and QCOW2 as well.
I would recommend using the Proxmox Backup Server for daily/weekly backups.[...]
I don't have another host available to run that currently, but will keep it in mind. I'm not using snapshots as a replacement for backups anyway, really only in case of OS upgrades and stuff like that.
 
