How to Trim ZFS Pool?

ga_lewis · New Member · Apr 24, 2023
Hi All

I have set up a single-disk (2TB) ZFS pool (called MediaDrive) and bind-mounted it to a container, which shares it as a Samba share with the same name (MediaDrive).

When I delete files from the MediaDrive, the used space does not seem to go down. For example, in the screenshot below I copied 40GB of data to the drive and then deleted it, but, as you can see, it still says I am using 40GB.

How do I trim / reclaim this space? Preferably automatically!

(I tried a manual trim in the PVE shell using zpool trim MediaDrive, but it says: cannot trim: no devices in pool support trim operations.)
Thank you!!

[Screenshot: pool usage still showing 40GB used]
 
Just to clarify: you created a zpool on a single 2TB disk and created a dataset that you bind-mounted to a container running Samba. You don't need to trim (and trim will not help) on a dataset with files; trim only helps on zvols.

Have you checked the used space after copying your data?
Are there snapshots present?
Please post the output of zpool list and zfs list in code tags.
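As an aside, if you want to verify whether trim is even possible on a pool, OpenZFS reports per-device trim status. A sketch, using the pool name from this thread (this will only show support on SSDs or thin-provisioned devices; plain HDDs report trim as unsupported):

Code:
```
# Show per-vdev TRIM status for the pool
zpool status -t MediaDrive

# On pools whose devices do support TRIM, it can run automatically
zpool set autotrim=on MediaDrive
zpool get autotrim MediaDrive
```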
 
Thanks @LnxBil

I think this was my silly mistake. There was a hidden .recycle folder; when I deleted its contents, the drive usage returned to zero. Apologies for any inconvenience and thanks for your help! So I think I'm all good.
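Since you asked about reclaiming the space automatically: the .recycle folder comes from Samba's vfs_recycle module, and a simple scheduled cleanup handles it. A minimal sketch, demonstrated against a scratch directory (the real path would be your share's .recycle folder, e.g. /MediaDrive/.recycle, and the file name is just an illustration):

Code:
```shell
# Stand-in for /MediaDrive/.recycle on the real share
RECYCLE="$(mktemp -d)/.recycle"
mkdir -p "$RECYCLE"

# Simulate a file that was "deleted" via Samba 31 days ago
dd if=/dev/zero of="$RECYCLE/old-movie.mkv" bs=1M count=1 status=none
touch -d "31 days ago" "$RECYCLE/old-movie.mkv"

# Remove recycled files older than 30 days; run this from cron
# for automatic reclaim
find "$RECYCLE" -type f -mtime +30 -delete
```

Pointing the same find command at the real recycle folder from a daily cron job keeps the pool from silently filling up again.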

One question, though, if I may: confirming that yes, I created a zpool on a single 2TB disk and created a dataset that I bind-mounted to a container running Samba. Is this an OK way to set up a Samba share?
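For reference, the usual way to wire this up in Proxmox is a bind mount point in the container's config; a sketch, where the container ID and the mount target inside the container are assumptions:

Code:
```
# /etc/pve/lxc/101.conf  ("101" is a placeholder container ID)
mp0: /MediaDrive,mp=/mnt/MediaDrive
```

The same line can be added from the PVE shell with pct set 101 -mp0 /MediaDrive,mp=/mnt/MediaDrive.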

FYI

Code:
root@proxmox:~# zpool list
NAME              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
BackupDrive       928G  73.0G   855G        -         -     0%     7%  1.00x    ONLINE  -
MediaDrive       1.81T  1.29M  1.81T        -         -     0%     0%  1.00x    ONLINE  -
MediaDrive2       464G   612K   464G        -         -     0%     0%  1.00x    ONLINE  -
VirtualMachines   444G  89.5G   355G        -         -     1%    20%  1.00x    ONLINE  -




Code:
root@proxmox:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
BackupDrive                        73.0G   826G  72.1G  /BackupDrive
BackupDrive/subvol-100-disk-0       833M  7.19G   833M  /BackupDrive/subvol-100-disk-0
MediaDrive                         1.29M  1.76T   104K  /MediaDrive
MediaDrive2                         612K   450G    96K  /MediaDrive2
VirtualMachines                     274G   157G   128K  /VirtualMachines
VirtualMachines/subvol-101-disk-0  5.55G  2.45G  5.55G  /VirtualMachines/subvol-101-disk-0
VirtualMachines/vm-200-disk-0       134G   272G  19.0G  -
VirtualMachines/vm-201-disk-0      32.5G   178G  10.8G  -
VirtualMachines/vm-202-disk-0         3M   157G   100K  -
VirtualMachines/vm-202-disk-1       102G   204G  54.1G  -
VirtualMachines/vm-202-disk-2         3M   157G    64K  -
 
This is OK.

That said, I have only ONE zpool with all my disks, so that I get the maximum speed, capacity, and redundancy.

Thanks - I didn't think I could create one zpool with disks of different sizes. Is that correct? That's why I created separate zpools.

BackupDrive is mirrored 1TB SATA drives.
VirtualMachines is mirrored 500GB SSDs.
The MediaDrives are single-disk zpools.
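For what it's worth, ZFS does allow mixing disk sizes in one pool: within a single mirror vdev the capacity is limited to the smallest disk, but vdevs of different sizes can be striped together. A hypothetical example, with placeholder device names:

Code:
```
# One pool striping a 1TB mirror with a 500GB mirror (~1.5TB usable)
zpool create tank \
    mirror /dev/disk/by-id/ata-1tb-a /dev/disk/by-id/ata-1tb-b \
    mirror /dev/disk/by-id/ata-500g-a /dev/disk/by-id/ata-500g-b
```

Whether that is a good idea for your disks is a separate question, as discussed below.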
 
Wouldn't make sense in your case. You probably don't want your backup drive to be part of that combined pool, so that you don't lose your backups and VMs at the same time when the pool fails.
It would be possible to stripe your mirrored "VirtualMachines" and single-disk "MediaDrives" for 1.5TB of storage and doubled performance, but then you would lose all your VMs if there is a problem with your "MediaDrives" disk. A bad idea, because mirroring the "VirtualMachines" disks would then be useless.
 
Thank you @Dunuin !
 
