ZFS dataset question (simple)

HenryTheTech

I am stuck with this one and I know it should be a simple answer, so sorry in advance...


Code:
zvol/henry/CQFsync                             3.10G  4.90G    96K  /zvol/henry/CQFsync
zvol/henry/CQFsync/subvol-100-disk-0           3.10G  1.90G  3.10G  /zvol/henry/CQFsync/subvol-100-disk-0

This looks correct to me: the child subvol uses 3.10G, so the remaining 1.90G adds up.

But then in my torrent dataset, it looks like this:


Code:
zvol/henry/torrent                              241G   159G    96K  /zvol/henry/torrent
zvol/henry/torrent/subvol-100-disk-0            241G  8.60G   241G  /zvol/henry/torrent/subvol-100-disk-0

The quota on zvol/henry/torrent is 400G, so that is correct (241 + 159 = 400).


I cannot for the life of me figure out why there is only 8.6G available in the subvol when there should be 159G remaining.

There are also no snapshots or anything like that.
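
A quick way to double-check that snapshots really aren't involved (using the dataset names from above) would be something like:

Code:
zfs list -r -o space zvol/henry/torrent

This prints the USEDSNAP, USEDDS and USEDCHILD columns for each dataset in the tree.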
 
I cannot for the life of me figure out why there is only 8.6G available in the subvol when there should be 159G remaining.

You created the LXC container with a default disk size, so the container's disk (the subvol) has its own limit, which is lower than the parent dataset's quota. You can query it with:

Code:
zfs get refquota zvol/henry/torrent/subvol-100-disk-0
 
You are correct sir:

Code:
zfs get refquota zvol/henry/torrent/subvol-100-disk-0

zvol/henry/torrent/subvol-100-disk-0  refquota  250G      local

Might I ask how to increase this? The following has no effect:

Code:
zfs set quota=400G zvol/henry/torrent/subvol-100-disk-0
 
zfs set quota=400G zvol/henry/torrent/subvol-100-disk-0

Use refquota:

Code:
zfs set refquota=400G zvol/henry/torrent/subvol-100-disk-0

You would also have to change the disk size in PVE itself, so it is best to just increase the size of the disk in PVE directly; PVE then issues the quota change internally.
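
Assuming disk-0 here is container 100's rootfs (the disk name may differ for extra mount points such as mp0), the CLI equivalent of resizing the disk in the PVE GUI would be something like:

Code:
pct resize 100 rootfs 400G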
 
You, sir, are the best in the world.

I had no idea there was a difference between quota and refquota. From this link:

Quota limits the overall size of a dataset and all of its children and snapshots, while refquota applies only to data directly referred to from within that dataset.

Quota would be useful if you delegated a dataset to another user (with permission to create additional datasets under that one) or if you wanted to limit the overall size of a given dataset. For instance, the /home directory of a multi-user file server could be limited to 10TB, which would ensure that the sum of all user home datasets and snapshots of those datasets could not exceed 10TB.

Refquota would be helpful if you had users who tend to overload a specific dataset. In the above example, each user's home directory might be limited to a 100GB quota and a 50GB refquota. This would mean their home directory could contain 50GB of data, but the sum of the live dataset and all snapshots couldn't exceed 100GB.

For anyone else wondering where this is done in PVE: it's in the container or VM menu under Resources -> Resize disk.
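
To make the quoted home-directory example concrete, setting both limits would look something like this (tank/home/alice is a made-up dataset name used purely for illustration):

Code:
zfs set quota=100G tank/home/alice     # dataset + descendants + snapshots limited to 100G
zfs set refquota=50G tank/home/alice   # live data in this dataset alone limited to 50G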
 
Use refquota:

Code:
zfs set refquota=400G zvol/henry/torrent/subvol-100-disk-0

You would also have to change the disk size in PVE itself, so it is best to just increase the size of the disk in PVE directly; PVE then issues the quota change internally.

Thanks a lot for this. I am shrinking the storage size of a few LXCs because I initially set them too large.
Using refquota worked for all of them except one, which was initially configured at around 90-100GB:

Code:
❯ zfs list pve-data/subvol-107-disk-0
NAME                         USED  AVAIL  REFER  MOUNTPOINT
pve-data/subvol-107-disk-0  92.7G  22.9G  27.1G  /pve-data/subvol-107-disk-0
❯ zfs get refquota pve-data/subvol-107-disk-0
NAME                        PROPERTY  VALUE     SOURCE
pve-data/subvol-107-disk-0  refquota  50G       local

Proxmox tells me that I'm using 27GB out of the 50GB.

Is there anything I can do to "normalize" the used/avail/refer values I'm seeing via zfs list? I hope it's not corrupted due to the shrinking...

Thanks for any advice you might give me.
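
For reference, the raw numbers behind both the PVE display and zfs list can be pulled in one command (same dataset as above):

Code:
zfs get used,available,referenced,refquota pve-data/subvol-107-disk-0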
 
The discrepancy between REFER and USED is due to snapshots and descendant datasets. refquota only restricts the space referenced by the current state of the dataset. There is also quota, which additionally restricts the used space value (space referenced by the current state plus all snapshots and descendants).

For documentation of each ZFS property, please see the manpage.
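
On current OpenZFS releases the dataset properties have their own manpage (on older releases they are documented in the main zfs manpage):

Code:
man zfsprops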
 
The discrepancy between REFER and USED is due to snapshots and descendant datasets

I see no snapshots or descendants that could justify that USED value. But I'm only looking at the UI; I think I will have to start digging deeper into the ZFS CLI.

There is also quota, which additionally restricts the used space value

I had already tried quota, but it gave me the following error:

Code:
❯ zfs set quota=50G pve-data/subvol-107-disk-0
cannot set property for 'pve-data/subvol-107-disk-0': size is less than current used or reserved space

So I guess that subvol actually contains data; it's not just a configuration issue.

Since I use PBS, I was thinking of simply scrapping the LXC and restoring it from PBS so it gets recreated, hoping it would respect the updated configuration.

Thanks for your help.
 
Digging into the man page is always helpful:

Code:
❯ zfs list pve-data/subvol-107-disk-0 -o usedbysnapshots
USEDSNAP
   66.6G

Code:
❯ zfs list pve-data/subvol-107-disk-0 -o usedbydataset
USEDDS
 27.4G

So it's the snapshots... could it be PBS? I never took any snapshots myself.
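
For completeness, the whole breakdown (including space used by child datasets) can be pulled in a single command; usedbydataset, usedbysnapshots and usedbychildren are standard ZFS properties:

Code:
zfs list -o name,used,usedbydataset,usedbysnapshots,usedbychildren pve-data/subvol-107-disk-0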
 
You can list the snapshots with
Code:
zfs list -t all -r pve-data/subvol-107-disk-0

Their names will probably indicate what created them.
 
Their names will probably indicate what created them.

Looks like zfs-auto-snapshot, but I never used/installed it. I'm really confused now. :(

Code:
pve-data/subvol-107-disk-0@zfs-auto-snap_daily-2024-01-22-0525     44.2M      -  27.4G  -
pve-data/subvol-107-disk-0@zfs-auto-snap_hourly-2024-01-22-0617    48.5M      -  27.4G  -
pve-data/subvol-107-disk-0@zfs-auto-snap_hourly-2024-01-22-0717    49.5M      -  27.4G  -
pve-data/subvol-107-disk-0@zfs-auto-snap_hourly-2024-01-22-0817    48.7M      -  27.4G  -
pve-data/subvol-107-disk-0@zfs-auto-snap_hourly-2024-01-22-0917    48.9M      -  27.4G  -
pve-data/subvol-107-disk-0@zfs-auto-snap_hourly-2024-01-22-1017    49.8M      -  27.4G  -
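
One way to confirm whether the zfs-auto-snapshot package really is installed on the node (PVE is Debian-based; the cron paths below are the usual ones for this package, but may vary):

Code:
dpkg -l | grep zfs-auto-snapshot
ls /etc/cron.*/*zfs-auto-snapshot*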
 
OK, I cleaned everything up on this node. I ran this for every label (hourly, daily, frequent, etc.):

Code:
❯ zfs-auto-snapshot --destroy-only --verbose --label=weekly --keep=1 -r pve-data

Then I removed the zfs-auto-snapshot Debian package. All good now:

Code:
❯ zfs list -t all -r pve-data
NAME                         USED  AVAIL  REFER  MOUNTPOINT
pve-data                    64.2G   858G   112K  /pve-data
pve-data/subvol-101-disk-0  2.95G  7.05G  2.95G  /pve-data/subvol-101-disk-0
pve-data/subvol-102-disk-1  5.05G  9.95G  5.05G  /pve-data/subvol-102-disk-1
pve-data/subvol-103-disk-0  2.52G  7.48G  2.52G  /pve-data/subvol-103-disk-0
pve-data/subvol-105-disk-0  11.7G  8.29G  11.7G  /pve-data/subvol-105-disk-0
pve-data/subvol-106-disk-0  11.1G  8.86G  11.1G  /pve-data/subvol-106-disk-0
pve-data/subvol-107-disk-0  27.4G  22.6G  27.4G  /pve-data/subvol-107-disk-0
pve-data/subvol-110-disk-0  3.16G  8.84G  3.16G  /pve-data/subvol-110-disk-0

Thanks a lot for your help.
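
For anyone who doesn't have the zfs-auto-snapshot tool available, leftover snapshots can also be removed with plain zfs destroy. The % syntax destroys a whole range of snapshots, and -n -v makes it a dry run first (snapshot names taken from the listing above):

Code:
zfs destroy -nv pve-data/subvol-107-disk-0@zfs-auto-snap_daily-2024-01-22-0525%zfs-auto-snap_hourly-2024-01-22-1017

Drop the -n to actually destroy the snapshots once the dry-run output looks right.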
 
