Feature Request: Creating & Managing datasets

Harmen

Hi there,

I know there are ways to manage ZFS from within PVE, but one thing I truly miss is the creation and management of ZFS datasets.
Of course you can do that manually with zfs create and then set whatever parameters you need, but it would be great if that task could be done from within the PVE GUI.

I make very frequent use of them, storing all my VMs in qcow2 images that live inside specific datasets. In PVE I configure them as directories, as that's the only way to do it. :) (I also found out that you cannot properly create a directory on a local ZFS filesystem from the GUI.)
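For context, the manual workflow I mean looks roughly like this (pool/dataset names and properties are just examples):

Code:
# create and tune a dataset for qcow2 images (example names and properties)
zfs create rpool/vmdata
zfs set compression=lz4 atime=off rpool/vmdata
# then add it to PVE as a plain directory storage
pvesm add dir vmdata --path /rpool/vmdata --content images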
 
What do you mean by that?

That, besides creating datasets, you can't create or manipulate directories within existing filesystems.
There is no file browser to do standard stuff.

I know my way around Linux fairly well, but especially with something like PVE, I'd like to stick to the GUI as much as possible. :)
 
That, besides creating datasets, you can't create or manipulate directories within existing filesystems.
There is no file browser to do standard stuff.

Now it's clear, thank you. Yet I think this will not be implemented, due to the fact that PVE is a virtualisation platform, not a fileserver.

You can achieve this by creating the datasets by hand and bind mounting them into your fileserver container. Many users use it like this.
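Roughly like this, for example (container ID, dataset and paths are just placeholders):

Code:
# create the dataset on the host (-p also creates the parent) and bind mount it into the container
zfs create -p rpool/shares/media
pct set 101 -mp0 /rpool/shares/media,mp=/mnt/media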
 
Now it's clear, thank you. Yet I think this will not be implemented, due to the fact that PVE is a virtualisation platform, not a fileserver.

You can achieve this by creating the datasets by hand and bind mounting them into your fileserver container. Many users use it like this.

Still, I think such operations would benefit from a proper GUI around them.

Sure, PVE is not a fileserver, but it does make use of an underlying filesystem. It would be great to have at least basic manipulation tools for that, like creating directories or making ZFS datasets.

Surprisingly PVE can make use of zvols, which are less convenient to use than datasets. I personally don't like them at all, also because snapshots work very differently than with ZFS datasets. Because of this: https://jrs-s.net/2016/06/16/psa-snapshots-are-better-than-zvols/

If you could easily make use of datasets, I think PVE would benefit from that. After all, it's PVE, not ESXi. :)
 
Surprisingly PVE can make use of zvols, which are less convenient to use than datasets. I personally don't like them at all, also because snapshots work very differently than with ZFS datasets. Because of this: https://jrs-s.net/2016/06/16/psa-snapshots-are-better-than-zvols/

What do you mean by "PVE can make use of zvols"? How would you use them in a container? For VMs they are already used, because they are the best fit.

A dataset, which is file based, has its advantages, as does the block based zvol. What the author of the linked article wrote about the differences is not wrong, but the difference comes down to the fact that you cannot run out of disk space inside a (fully reserved) zvol, while you can run out of disk space in a dataset if the pool is full - and 85% is already really full and will decrease performance significantly, because the metaslab allocator switches from best-fit to first-fit. ZFS will stop working if you reach something like 92% usage.
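You can keep an eye on that with something like this (pool name is just an example):

Code:
zpool list -o name,size,allocated,free,capacity,fragmentation rpool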

If you could easily make use of datasets, I think PVE would benefit from that.

Again: For what? Datasets are used for containers and they are also created for containers if it is not your root filesystem. Everything a virtualisation system needs is already there.
 
What do you mean by "PVE can make use of zvols"? How would you use them in a container? For VMs they are already used, because they are the best fit.

A dataset, which is file based, has its advantages, as does the block based zvol. What the author of the linked article wrote about the differences is not wrong, but the difference comes down to the fact that you cannot run out of disk space inside a (fully reserved) zvol, while you can run out of disk space in a dataset if the pool is full - and 85% is already really full and will decrease performance significantly, because the metaslab allocator switches from best-fit to first-fit. ZFS will stop working if you reach something like 92% usage.

The problem there is that if you have a 100G zvol and you want to create a snapshot, you need another 100G.
The whole idea of snapshots is that you don't need the same exact amount of storage as your original data, but that the snapshot is a delta. That's how they should work; only with zvols it works differently.

It's one of the reasons why I try not to use zvols.

I don't see a proper need for zvols, while you can do really well with datasets and images for the VMs and LXCs inside them. (I usually choose qcow2 for all my VMs; containers are stored as raw.)

Again: For what? Datasets are used for containers and they are also created for containers if it is not your root filesystem. Everything a virtualisation system needs is already there.

Nope.

The whole problem is that you can't create a dataset, can't create a directory, etc. A few things are possible when it comes to ZFS - you can even create a zpool if you still have unused disks. Which is great, but you usually don't create and destroy zpools very often, only at initial installation.

However, creating datasets and adding directories is something you'd use more often.

So that's why this feature request is here. I think for now there's room for improvement. One of the reasons I use PVE is its GUI. I just think some core functionality that should be in the GUI is missing. There is proper progress, it's getting there, and I expect to use PVE for many more years to come. I hope I will have a use case where I could use it professionally as well (so far I've only used it in private setups and for my foundation), but I'm not blind to the shortcomings.
 
The problem there is that if you have a 100G zvol and you want to create a snapshot, you need another 100G.
The whole idea of snapshots is that you don't need the same exact amount of storage as your original data, but that the snapshot is a delta. That's how they should work; only with zvols it works differently.

that's bogus, but seems to be a common misunderstanding.

if you have a fully-reserved zvol, the zvol itself will take up the full reservation (100G in your case). if you now create a snapshot, the snapshot will reference what is currently stored inside the dataset (let's call that X), and the total usage is now 100G + X. the usage is displayed confusingly if you don't understand what is going on at the layer below, but it would only take up an additional 100G if the zvol was completely full of data that no previous snapshot is referencing.

Code:
root@nora:~# zfs create -V 100G fastzfs/testvol
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME             AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol   198G   103G        0B     56K           103G         0B

103G used, all by the reservation (zvol is empty)

Code:
root@nora:~# zfs snapshot fastzfs/testvol@snapshot
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            198G   103G        0B     56K           103G         0B
fastzfs/testvol@snapshot      -     0B         -       -              -          -

snapshot does not reference any data, zvol unchanged

Code:
root@nora:~# dd if=/dev/urandom of=/dev/zvol/fastzfs/testvol bs=1M count=51200 status=progress
53547630592 bytes (54 GB, 50 GiB) copied, 224 s, 239 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 236.354 s, 227 MB/s
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            148G   103G       56K   50.5G          52.6G         0B
fastzfs/testvol@snapshot      -    56K         -       -              -          -

wrote 50G, now we have 50G used + the rest reserved, since that data is only referenced by the zvol itself

Code:
root@nora:~# zfs snapshot fastzfs/testvol@snapshot2
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol             148G   154G       56K   50.5G           103G         0B
fastzfs/testvol@snapshot       -    56K         -       -              -          -
fastzfs/testvol@snapshot2      -     0B         -       -              -          -

this is the confusing display - the 50G are now referenced by a snapshot and the zvol itself. they are accounted at the zvol level as long as they are still referenced there. once they are no longer referenced in the zvol, they will be accounted to the snapshot. the reservation is now again 103G, since that is the amount that we are still allowed to write to the zvol (all the current data is referenced in the snapshot, so it does not count).

effectively at this point, the used total data for the zvol+snapshots is the previous usage of the zvol + the full size of the zvol, compared to the full size of the zvol before. so creating a snapshot added the amount of currently referenced data to the total usage.
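in numbers, from the two listings above:

Code:
# before the snapshot: 50.5G data (USEDDS) + 52.6G reservation left (USEDREFRESERV) ~ 103G USED
# after the snapshot:  50.5G data (USEDDS) + 103G  reservation      (USEDREFRESERV) ~ 154G USED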

Code:
root@nora:~# dd if=/dev/urandom of=/dev/zvol/fastzfs/testvol bs=1M count=10240 status=progress
10502537216 bytes (11 GB, 9.8 GiB) copied, 43 s, 244 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 81.5687 s, 132 MB/s
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol             138G   154G     10.1G   50.5G          93.0G         0B
fastzfs/testvol@snapshot       -    56K         -       -              -          -
fastzfs/testvol@snapshot2      -  10.1G         -       -              -          -

overwriting 10G of the old data with new random bytes, we can now see that the old content is accounted to snapshot2 (since it is still referenced there) and the reservation of the zvol is diminished by 10G. effectively this means writing to the zvol (no matter how much) does not change the total usage at this point.

Code:
root@nora:~# zfs snapshot fastzfs/testvol@snapshot3
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol             138G   164G     10.1G   50.5G           103G         0B
fastzfs/testvol@snapshot       -    56K         -       -              -          -
fastzfs/testvol@snapshot2      -  10.1G         -       -              -          -
fastzfs/testvol@snapshot3      -     0B         -       -              -          -

creating another snapshot again bumps the reservation to the full size - the 10G written between snapshot2 and snapshot3 are still displayed at the zvol level, since they are referenced there AND in snapshot3. once they are no longer referenced by the zvol, they will be accounted to snapshot3, just like the 10G in snapshot2.

you can do the same with a sparse/thin-provisioned/unreserved zvol, but keep in mind that, as with all thin-provisioning, this allows over-committing your storage: if sufficiently many zvols get full enough, none of them can be written to at all (in fact, nothing is guaranteed in the VM case; to the VM it looks like a very broken disk).

setting the refreservation to none (i.e., making the zvol sparse):
Code:
root@nora:~# zfs set refreservation=none fastzfs/testvol
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol             138G  60.6G     10.1G   50.5G             0B         0B
fastzfs/testvol@snapshot       -    56K         -       -              -          -
fastzfs/testvol@snapshot2      -  10.1G         -       -              -          -
fastzfs/testvol@snapshot3      -     0B         -       -              -          -

drops the total usage by the refreservation. but this also means that while the zvol is 100G big, nothing ensures that we can actually write (fresh) 100G of data to it.
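(you can also create a zvol without the reservation from the start - the name here is just an example:)

Code:
zfs create -s -V 100G fastzfs/sparsevol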
 
same for a regular dataset:
Code:
root@nora:~# zfs create -o refquota=100G fastzfs/testsubvol
root@nora:~# zfs list -t all -r -o space fastzfs/testsubvol
NAME                AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testsubvol   100G    96K        0B     96K             0B         0B
root@nora:~# zfs snapshot fastzfs/testsubvol@snapshot
root@nora:~# zfs list -t all -r -o space fastzfs/testsubvol
NAME                         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testsubvol            100G    96K        0B     96K             0B         0B
fastzfs/testsubvol@snapshot      -     0B         -       -              -          -
root@nora:~# dd if=/dev/urandom of=/fastzfs/testsubvol/testfile bs=1M count=51200 status=progress
53616836608 bytes (54 GB, 50 GiB) copied, 210 s, 255 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 210.273 s, 255 MB/s
root@nora:~# zfs list -t all -r -o space fastzfs/testsubvol
NAME                         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testsubvol           50.5G  49.5G       64K   49.5G             0B         0B
fastzfs/testsubvol@snapshot      -    64K         -       -              -          -
root@nora:~# zfs snapshot fastzfs/testsubvol@snapshot2
root@nora:~# zfs list -t all -r -o space fastzfs/testsubvol
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testsubvol            50.0G  50.0G       64K   50.0G             0B         0B
fastzfs/testsubvol@snapshot       -    64K         -       -              -          -
fastzfs/testsubvol@snapshot2      -     0B         -       -              -          -
root@nora:~# dd if=/dev/urandom of=/fastzfs/testsubvol/testfile bs=1M count=10240 status=progress
10641997824 bytes (11 GB, 9.9 GiB) copied, 42 s, 253 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 42.3662 s, 253 MB/s
root@nora:~# zfs list -t all -r -o space fastzfs/testsubvol
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testsubvol            78.2G  59.6G     50.0G   9.53G             0B         0B
fastzfs/testsubvol@snapshot       -    64K         -       -              -          -
fastzfs/testsubvol@snapshot2      -  50.0G         -       -              -          -
root@nora:~# zfs snapshot fastzfs/testsubvol@snapshot3
root@nora:~# zfs list -t all -r -o space fastzfs/testsubvol
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testsubvol            77.7G  60.0G     50.0G   10.0G             0B         0B
fastzfs/testsubvol@snapshot       -    64K         -       -              -          -
fastzfs/testsubvol@snapshot2      -  50.0G         -       -              -          -
fastzfs/testsubvol@snapshot3      -     0B         -       -              -          -

as you can see, the accounting is a bit different, but the total usage of a regular dataset and a sparse zvol is exactly the same. the difference is: when using a dataset, running out of space simply returns ENOSPC (like any other file system). with a zvol, it means a block device has less "physical" space than it says it has. the same applies to all other thin block device/disk image formats, whether they are LVM-thin, qcow2, sparse raw, ...

this problem is unavoidable for VMs - either you reserve the full space up front (no problem if you have enough space), or you thin-provision (no need unless you want to over-commit, and if you over-commit you risk your data).

you can also reserve the full space for a regular dataset btw, if you want to ensure that applications/containers using it only get ENOSPC when they hit their quota, not because the underlying pool runs out of space.
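a minimal sketch of that last point, assuming the pool still has enough free space for the reservation (names and sizes are just examples):

Code:
# the quota caps the dataset at 100G, the matching refreservation guarantees that space is actually there
zfs create -o refquota=100G -o refreservation=100G tank/guaranteed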
 
Code:
root@nora:~# zfs create -V 100G fastzfs/testvol
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME             AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol   198G   103G        0B     56K           103G         0B

103G used, all by the reservation (zvol is empty)

How come you create 100G and the available space is bigger than what you created? Does that mean that it has created a 100GB portion of a 200GB disk?
Why is the amount of space used (103G) bigger than what was created?
Which value gives you the assumption that the zvol is empty? Is it USEDCHILD?
 
How come you create 100G and the available space is bigger than what you created? Does that mean that it has created a 100GB portion of a 200GB disk?

because available is the free space that this dataset or its children can still use, not how much is "unused" inside the dataset.

Why is the amount of space used (103G) bigger than what was created?

because you always have some overhead for metadata etc.

Which value gives you the assumption that the zvol is empty? Is it USEDCHILD?
no, usedds is 56K and usedrefreserv is 103G, so all the "usage" is just the reservation, not actual data.
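(you can also query those columns as properties directly, e.g.:)

Code:
zfs get used,usedbydataset,usedbyrefreservation fastzfs/testvol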
 
because available is the free space that this dataset or its children can still use, not how much is "unused" inside the dataset.
I didn't say anything about unused space. It's the same question again: can the dataset/children use more space than what was initially created?

because you always have some overhead for metadata etc.
I got that part, but then again, how can the metadata exceed the initially given space? It doesn't make sense.

no, usedds is 56K and usedrefreserv is 103G, so all the "usage" is just the reservation, not actual data.
Burnt again... and the reservation is which abbreviation?
 
I didn't say anything about unused space. It's the same question again: can the dataset/children use more space than what was initially created?
I am not sure what exactly the question is. 'available' just says how much more space this dataset can (in theory) use; it is not related to how much it is currently using, or how big the reservation/volume is.
I got that part, but then again, how can the metadata exceed the initially given space? It doesn't make sense.
for zvols, the metadata is not part of the volume size but separate, so you always have an overhead (for regular filesystem datasets, the metadata is part of the dataset, so it is accounted differently).
Burnt again... and the reservation is which abbreviation?
'usedrefreserv'
 
One last effort, since I'm getting tired and I don't want to bother you with this again. The part about the abbreviation was straightforward, so I'm OK with that. Nice.

for zvols, the metadata is not part of the volume size but separate, so you always have an overhead (for regular filesystem datasets, the metadata is part of the dataset, so it is accounted differently).
My logic dictates (...and not ZFS's rules - that is what I am trying to figure out after all): when you create a storage/space (in your case 100GB), there would be two options:

- Either the underlying OS/hypervisor/filesystem knows that it will need space for metadata and allocates, from the 100GB created, 10-20GB for that purpose. In this case the available space would be 80GB to be used as the user sees fit and 20GB for metadata, so the sum would be 100GB - and not the 198GB available like in your example (I can't get how this 198 is being calculated).
- Or the underlying OS/hypervisor/filesystem knows that it will need space for metadata and, on top of those 100GB, reserves an additional 10-20GB outside of the 100GB. Of course, for this to happen, the user must always calculate how many GB that additional space would be and take it into consideration in order to have that extra space. I don't think this (wrong) logic applies to the ZFS filesystem (or does it?).

With both explanations above I could understand how it works, but the numbers in your example dictate otherwise. That is what I am trying to figure out: how are the numbers in the example being calculated? If you still have the courage to answer :), please use the abbreviations (all of them if possible): AVAIL / USED / USEDSNAP / USEDDS / USEDREFRESERV / USEDCHILD.
 
One last effort, since I'm getting tired and I don't want to bother you with this again. The part about the abbreviation was straightforward, so I'm OK with that. Nice.


My logic dictates (...and not ZFS's rules - that is what I am trying to figure out after all): when you create a storage/space (in your case 100GB), there would be two options:

- Either the underlying OS/hypervisor/filesystem knows that it will need space for metadata and allocates, from the 100GB created, 10-20GB for that purpose. In this case the available space would be 80GB to be used as the user sees fit and 20GB for metadata, so the sum would be 100GB - and not the 198GB available like in your example (I can't get how this 198 is being calculated).

- Or the underlying OS/hypervisor/filesystem knows that it will need space for metadata and, on top of those 100GB, reserves an additional 10-20GB outside of the 100GB. Of course, for this to happen, the user must always calculate how many GB that additional space would be and take it into consideration in order to have that extra space. I don't think this (wrong) logic applies to the ZFS filesystem (or does it?).
with ZFS filesystem datasets, it works a bit like that: if you say you want a dataset that can store up to 100G, that will be set as a quota, and usage (both data and metadata) gets accounted towards it. with zvols, if you say "I want a volume that can store 100G", that is 100G of data, and the metadata gets accounted on top of that (and in your case, you end up with a reservation of 103G instead of just 100G).
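you can see that difference by comparing a fresh 100G zvol with a dataset that has a 100G refquota (hypothetical names, just to illustrate):

Code:
zfs create -V 100G tank/vol100           # zvol: refreservation ends up slightly above 100G (volsize + metadata)
zfs create -o refquota=100G tank/fs100   # dataset: the 100G quota covers data and metadata
zfs get volsize,refreservation tank/vol100
zfs get refquota tank/fs100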
With both explanations above I could understand how it works, but the numbers in your example dictate otherwise. That is what I am trying to figure out: how are the numbers in the example being calculated? If you still have the courage to answer :), please use the abbreviations (all of them if possible): AVAIL / USED / USEDSNAP / USEDDS / USEDREFRESERV / USEDCHILD.
avail has nothing to do with the currently used space - it tells you how much more you can use (e.g., in the case of zvols, by creating a snapshot, or, if it is a thin volume, by writing data; for filesystems different rules apply). see the zfsprops man page:

Code:
available             The amount of space available to the dataset and all its children, assuming that there is no other activity in the pool.
                      Because space is shared within a pool, availability can be limited by any number of factors, including physical pool size,
                      quotas, reservations, or other datasets within the pool.

that same man page also gives you descriptions of the used* parameters:

Code:
used                  The amount of space consumed by this dataset and all its descendents.  This is the value that is checked against this
                      dataset's quota and reservation.  The space used does not include this dataset's reservation, but does take into account
                      the reservations of any descendent datasets.  The amount of space that a dataset consumes from its parent, as well as the
                      amount of space that is freed if this dataset is recursively destroyed, is the greater of its space used and its reservation.

                      The used space of a snapshot (see the Snapshots section of zfsconcepts(8)) is space that is referenced exclusively by this
                      snapshot.  If this snapshot is destroyed, the amount of used space will be freed.  Space that is shared by multiple snapshots
                      isn't accounted for in this metric.  When a snapshot is destroyed, space that was previously shared with this snapshot can
                      become unique to snapshots adjacent to it, thus changing the used space of those snapshots.  The used space of the latest
                      snapshot can also be affected by changes in the file system.  Note that the used space of a snapshot is a subset of the
                      written space of the snapshot.

                      The amount of space used, available, or referenced does not take into account pending changes.  Pending changes are generally
                      accounted for within a few seconds.  Committing a change to a disk using fsync(2) or O_SYNC does not necessarily guarantee
                      that the space usage information is updated immediately.

usedby*               The usedby* properties decompose the used properties into the various reasons that space is used.  Specifically, used =
                      usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots.  These properties are only available for datasets
                      created on zpool "version 13" pools.

usedbychildren        The amount of space used by children of this dataset, which would be freed if all the dataset's children were destroyed.

usedbydataset         The amount of space used by this dataset itself, which would be freed if the dataset were destroyed (after first removing
                      any refreservation and destroying any necessary snapshots or descendents).

usedbyrefreservation  The amount of space used by a refreservation set on this dataset, which would be freed if the refreservation was removed.

usedbysnapshots       The amount of space consumed by snapshots of this dataset.  In particular, it is the amount of space that would be freed if
                      all of this dataset's snapshots were destroyed.  Note that this is not simply the sum of the snapshots' used properties
                      because space can be shared by multiple snapshots.

free space calculations in ZFS are always a bit more involved than with other systems, since it does physical and logical volume management, compression, snapshots, and a filesystem all in one piece of software.
 
@fabian thank you very much for your time and effort to explain all this. I really appreciate your continued explanations, instead of just posting a link to the ZFS manual and proposing to read it through since the answer is in there.

PS: By the way, if you want to check my (despite the long post) simple question at the following link https://forum.proxmox.com/threads/d...r-zfs-my-use-case-scenario.88087/#post-386321 I'll be glad to read your opinion!!

Thank you once more!!!
 
Since it is still related to the topic of this thread: after reading all of https://www.freebsd.org/cgi/man.cgi?query=zfs&sektion=8&manpath=FreeBSD+7.0-RELEASE on datasets, until my head melted from the terminology/options/parameters/etc., please enlighten me on this use case for a dataset.

Assuming I have created a 2TB zpool named HH, inside it I need to create a dataset for Proxmox named bckup, with a capacity of 500G, to have space for VM/LXC backups.
So by running:
zfs create -o mountpoint=/...... HH/bckup (I typed ..... on purpose, because if I don't specify a mountpoint, it will automatically be mounted at /bckup, right? Do I have the option to mount it at whatever path I choose, even if that is /mnt/backups, which links to another zpool, the main one? Or is /mnt/bckup not a folder of the main zpool, so it would just create the folder mnt in the HH dataset - which is not a zpool, by the way? I can't clarify that part at all from the wiki.)

zfs set quota=500G HH/bckup (but how do I force that dataset not to exceed 85% of that space? I can't find the command for that)
zfs set compression=on HH/bckup (or better, set compression=lz4 from scratch, which is the default value)

....and the last questions follow:
- Do I need to create separate folders inside (after creating the dataset) for daily/weekly/monthly backups, or will that be done automatically by Proxmox when I present that bckup space to it? I mean, there has got to be a way for things inside backups to be organised, and my assumption is that Proxmox will create default folders for that. Right? Do we know how these paths would be named?

- Even though the manual explains a way to run all the above options as one command line, I didn't find an example of doing so; instead it creates the dataset and then starts issuing the set subcommand. Should I use -o for each option, like the following (see also the sketch at the end of this post)?
zfs create -o mountpoint=.... -o quota=.... -o compression=..... HH/bckup

- Is the default recordsize during dataset creation 128k? (Is there a better value for a backup dataset on disks with 512b logical / 4096b physical sectors?)

- If I were to create just the dataset without options, would that be considered the parent dataset, since there is no other dataset above it to inherit properties/attributes from? In that case, does it just inherit the default ones?

- Are there any other options during dataset creation that could be mandatory and that I forgot to mention here? I've already said what the dataset is going to be used for.


I think that if these questions are answered, it will help a lot more people than just me.

Thank you !!
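Something like this is the combined form I have in mind (just a sketch with the values from above, not tested):

Code:
zfs create -o mountpoint=/mnt/bckup -o quota=500G -o compression=lz4 HH/bckup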
 
