I'm setting up Proxmox 6.4 with most of the storage on ZFS RAID1, simply using the options in the graphical installer. A few GiB of the available space are left for swap on MDRAID, because it's still advised not to put swap on ZFS. The goal is to host mainly VMs, and most of them need to be stored encrypted, so I created an encrypted dataset and added it as storage using "pvesm". The workflow will be that whenever the server needs a reboot, the dataset is mounted manually by providing the passphrase. While setting things up, I noticed several things I would like to ask about:
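For reference, this is roughly how I set up the encrypted dataset and added it as storage; pool, dataset and storage names are just examples from my setup:
Code:
# create an encrypted dataset on the pool the installer created
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=prompt rpool/encrypted
# add it as ZFS storage in Proxmox
pvesm add zfspool encrypted-zfs --pool rpool/encrypted --content images,rootdir
# after a reboot, load the key and mount manually
zfs load-key rpool/encrypted
zfs mount rpool/encrypted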
1. Is a "local" storage always added when it's missing from the config?
My goal was to store everything by default in the encrypted dataset, so I tried to remove the storage named "local" of type "dir" that the installer created. However, after adding and removing various storages, that "local" was recreated, and this time it was configured to store all content types. That wasn't the case with what the installer added, which configured "images" and "rootdir" on ZFS only. It also doesn't seem to depend on whether a storage of type "dir" exists, because I created one and "local" was recreated anyway.
Does this depend on the name "local", which I never used in my tests? Or does Proxmox check paths and expect something like "/var/lib/vz"? That path is part of ZFS in the end anyway.
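For context, the installer-created entries in /etc/pve/storage.cfg looked roughly like this; I'm quoting from memory, so the exact content lists and dataset names may differ:
Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1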
2. Should I simply change the path of the default "local" storage to point somewhere else?
I'm wondering because "pvesm set ..." doesn't seem to support an option named "--path", while "pvesm add ..." does, and "--pool" is supported for "set" as well. It looks like changing a path is not supported, while changing a pool is?
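To illustrate, this is roughly what I tried (storage names and paths are only examples):
Code:
# "--path" is accepted when adding a new dir storage ...
pvesm add dir some-dir --path /rpool/encrypted/some-dir
# ... but rejected when trying to modify an existing one afterwards
pvesm set some-dir --path /rpool/encrypted/other-dir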
3. Why are some content types not allowed for storage of type "zfspool"?
When trying to create a new ZFS storage, I provided all the values listed in the docs for "--content", only to see that many of them are not allowed, even though ZFS is documented to be file-level storage pretty much like "dir".
Code:
storage does not support content type 'vztmpl'
storage does not support content type 'backup'
storage does not support content type 'iso'
storage does not support content type 'snippets'
Is there any reason for those restrictions? I'm wondering because what's missing for "zfspool" is covered by "local" of type "dir" by default. Still, I'm easily able to create a storage of type "dir" with all the available content types and a path within the ZFS pool, so why is there such a distinction? See the two commands below.
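For comparison, roughly the two commands side by side (storage and dataset names are just examples from my setup):
Code:
# rejected for type "zfspool" with the errors shown above
pvesm add zfspool encrypted-zfs --pool rpool/encrypted --content images,rootdir,iso,vztmpl,backup,snippets
# accepted for type "dir" on a path inside the very same pool
pvesm add dir encrypted-dir --path /rpool/encrypted/dir --content images,rootdir,iso,vztmpl,backup,snippets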
4. Do I need to create the storage as type "dir" instead of "zfspool"?
I would like to store all data in the encrypted ZFS dataset and at the same time need to mount it manually. However, "zfspool" doesn't support some content types at all, and it doesn't seem to support "--is_mountpoint yes" either: at least I get an error message about an "unexpected property" of that name, which is NOT the case when using storage type "dir" instead of "zfspool".
Does this mean a storage of type "zfspool" is expected to always be available and can't be mounted manually on demand? In other posts, "is_mountpoint" is explicitly recommended for my use case of encrypted ZFS, but again only with type "dir" providing the underlying storage. I don't really get why it needs to be that way.
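This is the combination that does get accepted for me (names are placeholders; adding "--mkdir 0" is my own idea to keep Proxmox from creating the directory while the dataset is still unmounted, so treat that part with care):
Code:
# rejected: "is_mountpoint" is an unexpected property for zfspool
pvesm add zfspool encrypted-zfs --pool rpool/encrypted --is_mountpoint yes
# accepted: dir storage pointing at the dataset's mountpoint
pvesm add dir encrypted-dir --path /rpool/encrypted --content images,rootdir,iso,vztmpl,backup,snippets --is_mountpoint yes --mkdir 0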
5. What are the snapshot-related differences between "dir" and "zfspool"?
According to the docs of the storage types, ZFS supports snapshots, while "dir" doesn't, except when using the "qcow2" image format for VMs. This reads to me like the snapshot feature is implemented at different levels: at the level of the file system in the case of ZFS, and at the level of QEMU and the image file in the case of "dir". In the latter case it would work with additional data inside or next to the image file, pretty much like VMware, VirtualBox etc. implement their VM-specific snapshots on file systems without snapshot support, like ext4 or NTFS.
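Expressed as commands, my current understanding is roughly the following (disk and dataset names are made up):
Code:
# zfspool: a VM snapshot becomes a ZFS snapshot of the zvol backing the disk
zfs list -t snapshot rpool/encrypted/vm-100-disk-0
# dir + qcow2: the snapshot is stored inside the image file itself
qemu-img snapshot -l /rpool/encrypted/dir/images/100/vm-100-disk-0.qcow2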
Thanks!