ZFS trim and over-provisioning support

eth

Renowned Member
Feb 24, 2016
Hi!

I want to install Proxmox on a group of SSD disks. I have a few questions.
  1. Does ZFS RAID support TRIM in Proxmox?
  2. I want to pool my 4 SSD drives into a RAID 0 pool. Is it possible to do "over-provisioning" (partition the space lower than the actual size of the drives)? From what I saw, the default installation method gives all of the pool's capacity to the Proxmox node.
I want to be able to install my virtual machines directly on the drives and not into a file on the Proxmox node. And I wanted to leave 10% of the pool's capacity unpartitioned to do "over-provisioning".
 
> 1. Does ZFS RAID support TRIM in Proxmox?
Yes.
> 2. I want to pool my 4 SSD drives into a RAID 0 pool. Is it possible to do "over-provisioning" (partition the space lower than the actual size of the drives)? From what I saw, the default installation method gives all of the pool's capacity to the Proxmox node.
Not directly in the way you describe... ZFS handles all of this for you.

> And I wanted to leave 10% of the pool's capacity unpartitioned to do "over-provisioning".
That's not needed. ZFS always uses the whole disk.

First: the best way is to use 2 SSDs only for the OS. We always do this with two Samsung 750 EVOs; no enterprise drives are needed for the OS on ZFS. As for your RAID: RAID0 is never a good idea, not for production and, I really must say, not for testing either. Say you are testing something important, the RAID fails during the test, and everything is gone ;(

Regarding your over-provisioning: yes, this works out of the box (set the checkbox on the storage tab). We have it enabled on all our servers. When you use ZFS, virtual machines are never stored as files on the filesystem; they use the zvol feature. That's a big advantage of ZFS.
When you create your ZFS pool, always give it the whole disks. Everything you describe can be managed with zfs and zpool. That is the ZFS way: no partitions, no mount points in fstab.
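For example, a rough sketch of what I mean (the dataset and zvol names here are only placeholders, not necessarily what the installer creates on your system):

    # A thin ("sparse") zvol for a VM disk: space is only allocated as the guest writes
    zfs create -s -V 32G rpool/data/vm-100-disk-0

    # A normal dataset, capped with a quota instead of a partition
    zfs create rpool/data/mydata
    zfs set quota=50G rpool/data/mydata

    # Check the allocation at the ZFS level
    zfs get volsize,refreservation,used rpool/data/vm-100-disk-0
    zfs list -o name,used,avail,quota -r rpool/data

Everything is sized or limited with zfs properties; no repartitioning needed.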

https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks
https://pve.proxmox.com/wiki/Storage:_ZFS

One more thing: ZFS needs more RAM than a normal OS, and ECC memory is strongly recommended. So for virtualization, 32 GB+.
 
No, ZFS on Linux (ZoL) does not support TRIM.

Yep, that's what I've read so far as well.

Do you think a weekly cron job that issues a TRIM command directly to the disks backing the ZFS RAID would solve the problem?
 
> Regarding your over-provisioning: yes, this works out of the box (set the checkbox on the storage tab).

When installing, I can only select the ZFS RAID type and which disks to use. I can't select how much disk space should be used (I assume the installer uses the whole pool). Or are you talking about something else here?
 
> No, ZFS on Linux (ZoL) does not support TRIM.
No? ... OK, so am I right that it is different for a dataset/zvol?

When I create a dataset and write 5000 MB to it (SSD RAID) and then delete it, the storage space is free again after the erase. I have never tested it with VMs (TRIM on Ubuntu and Windows runs automatically).
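Roughly what I was testing, as a sketch (the pool name "tank" is only a placeholder, and this only shows the ZFS-level accounting):

    # Use urandom instead of zero so lz4 compression does not hide the space usage
    zfs create tank/trimtest
    dd if=/dev/urandom of=/tank/trimtest/blob bs=1M count=5000
    zfs list tank/trimtest     # USED grows by roughly 5000 MB
    rm /tank/trimtest/blob
    zfs list tank/trimtest     # USED drops again shortly after, unless a snapshot still references the data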
 
> When installing, I can only select the ZFS RAID type and which disks to use. I can't select how much disk space should be used (I assume the installer uses the whole pool). Or are you talking about something else here?
Yes, this is fine; after that you can create datasets.
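If you want to keep part of the pool free on purpose, one common trick is a quota or an empty reserved dataset. A sketch (names and sizes are placeholders; this only limits usage at the ZFS level, it is not firmware-level SSD over-provisioning):

    # Reserve space in an empty dataset so the pool can never fill up completely
    # (size it to roughly 10% of your pool)
    zfs create rpool/reserved
    zfs set refreservation=100G rpool/reserved

    # Or cap an individual dataset with a quota
    zfs create rpool/data/extra
    zfs set quota=200G rpool/data/extra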
 
> Yep, that's what I've read so far as well.
>
> Do you think a weekly cron job that issues a TRIM command directly to the disks backing the ZFS RAID would solve the problem?
It will not help, since it is the filesystem that is responsible for notifying the disk controller when a block has been marked free, and since TRIM/unmap is a no-op in ZoL, this is not going to happen.
 
> No? ... OK, so am I right that it is different for a dataset/zvol?
It does not make any difference whether it is a dataset or a zvol.
> When I create a dataset and write 5000 MB to it (SSD RAID) and then delete it, the storage space is free again after the erase. I have never tested it with VMs (TRIM on Ubuntu and Windows runs automatically).
This is at the filesystem level. As far as the disk controller is concerned, the blocks are still not free.
 
> This is at the filesystem level. As far as the disk controller is concerned, the blocks are still not free.
Is this the same for LXC? LXC sits directly on ZFS, not in a VM. So does trim work there?
 
Maybe there is some misunderstanding @fireon:

DISKS --(1)-- ZFS Pool -- ZVOL --(2)-- KVM/Qemu VM
                       \-- ZFS --(3)-- e.g. LX(C) Container

(1) does not support TRIM in ZoL yet,
(2) does support TRIM if the VM is set up correctly (see the sketch below),
(3) does trimming (freeing up space in ZFS) automatically on file deletion, if the file is not part of a snapshot.
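For (2), a minimal sketch of what "set up correctly" could mean (VM ID 100 and the storage name local-zfs are placeholders; as discussed above, this frees space inside the zvol but does not reach the physical disks):

    # /etc/pve/qemu-server/100.conf  (can also be set in the GUI disk options)
    scsihw: virtio-scsi-pci
    scsi0: local-zfs:vm-100-disk-0,discard=on

    # Inside the guest, after deleting data:
    fstrim -av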
 
> Maybe there is some misunderstanding @fireon:
>
> DISKS --(1)-- ZFS Pool -- ZVOL --(2)-- KVM/Qemu VM
>                        \-- ZFS --(3)-- e.g. LX(C) Container
>
> (1) does not support TRIM in ZoL yet,
> (2) does support TRIM if the VM is set up correctly,
> (3) does trimming (freeing up space in ZFS) automatically on file deletion, if the file is not part of a snapshot.
That is still all at the filesystem level; no trimming is done at the disk level, so it is wrong to talk about TRIM in the context defined here. TRIM means adjusting usage at the disk level by marking disk blocks as free for the controller to use for storing new data.
 
