[SOLVED] Enable sparse on existing ZFS storage

limone

Well-Known Member
Aug 1, 2017
Hi,

I noticed my manually created ZFS storage is not sparse (thin provisioned).

Is there an easy way to activate it afterwards?
I could easily add "sparse 1" to my storage config, but would that work, and most importantly, not destroy the data?
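
For reference, I guess the entry in /etc/pve/storage.cfg would then look roughly like this (storage/pool name "NVMe" is just my setup):

Code:
zfspool: NVMe
        pool NVMe
        content images,rootdir
        sparse 1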
 
That would only affect newly created volumes/datasets, but you can easily set the reservation for existing ones manually as well - see 'man zfs'.
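
For example, to see what is currently reserved for an existing volume (the dataset name is just an example, adjust it to yours):

Code:
zfs get volsize,reservation,refreservation rpool/data/vm-100-disk-0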
 
I've set it to sparse 1 now, and will migrate the disks from that datastore to another and then back, so it's a new disk, right?
(zfs set reservation=0 NVMe/vm-540-disk-1 doesn't change anything)
 
OK, looks like migrating to another storage and back does not do the trick.

before:
Code:
root@pve-lab:~# zfs get all NVMe/vm-580-disk-0 | grep used
NVMe/vm-580-disk-0  used                  10.3G                  -
NVMe/vm-580-disk-0  usedbysnapshots       0B                     -
NVMe/vm-580-disk-0  usedbydataset         7.66G                  -
NVMe/vm-580-disk-0  usedbychildren        0B                     -
NVMe/vm-580-disk-0  usedbyrefreservation  2.66G                  -
NVMe/vm-580-disk-0  logicalused           7.62G                  -

after:
Code:
root@pve-lab:~# zfs get all NVMe/vm-580-disk-0 | grep used
NVMe/vm-580-disk-0  used                  9.96G                  -
NVMe/vm-580-disk-0  usedbysnapshots       0B                     -
NVMe/vm-580-disk-0  usedbydataset         9.96G                  -
NVMe/vm-580-disk-0  usedbychildren        0B                     -
NVMe/vm-580-disk-0  usedbyrefreservation  0B                     -
NVMe/vm-580-disk-0  logicalused           9.93G                  -
 
You probably need to trim the volume afterwards to regain unused space. Just setting the refreservation, without moving the disk at all, would have done the trick as well.
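
Something along these lines should work (dataset name taken from your output above; note that fstrim inside the guest only reaches the zvol if the virtual disk has discard enabled):

Code:
# drop the fixed reservation on the existing zvol
zfs set refreservation=none NVMe/vm-580-disk-0
# then, inside the guest, release blocks that are already free
fstrim -av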
 
Unfortunately that didn't work either :(

Code:
root@pve-lab:~# zfs get all NVMe/vm-901-disk-0 | grep used   
NVMe/vm-901-disk-0  used                  82.5G                  -
NVMe/vm-901-disk-0  usedbysnapshots       0B                     -
NVMe/vm-901-disk-0  usedbydataset         12.9G                  -
NVMe/vm-901-disk-0  usedbychildren        0B                     -
NVMe/vm-901-disk-0  usedbyrefreservation  69.6G                  -
NVMe/vm-901-disk-0  logicalused           12.8G                  -

After running `fstrim /` inside the VM, it's still the same.
 
I've set it to sparse 1 now, and will migrate the disks from that datastore to another and then back, so it's a new disk, right?
(zfs set reservation=0 NVMe/vm-540-disk-1 doesn't change anything)

The correct command is "zfs set refreservation=0G NVMe/vm-901-disk-0"

my bad
*facepalm*
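
For anyone else landing here: refreservation=none works as well, and you can verify the result like this (usedbyrefreservation should drop to 0B):

Code:
zfs set refreservation=none NVMe/vm-901-disk-0
zfs get refreservation,usedbyrefreservation NVMe/vm-901-disk-0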
 