ZFS on Linux Recommended Optimizations for SSDs vs. Proxmox? Looking for Advice on Impact of Tweaking Pool Settings

Sep 1, 2022
Hello,

I found a guide for ZFS in general (not Proxmox-specific) focused on extending the life of SSDs used with ZFS. Most of its recommendations aren't applied to rpool in a default PVE install, so it got me curious.

One is: Swap is disabled.
Beyond that, these are the settings I'm curious about:

Enable TRIM on PVE host node and each VM.
Code:
zpool set autotrim=on $PoolName
As far as each VM goes, this makes perfect sense. I've seen VM virtual disks where TRIM isn't enabled get treated like spinning rust by the virtualized OS.
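To make the per-VM part concrete, here's a sketch of what I mean (VM ID 100, the local-zfs storage name, and the scsi0 disk are placeholders; adjust to your own setup):

```shell
# On the PVE host: enable the discard option on the VM's virtual disk,
# so the guest's TRIM commands reach the zvol underneath.
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on,ssd=1

# Inside a Linux guest: verify discard passes through, then trim all mounts.
lsblk --discard    # non-zero DISC-GRAN/DISC-MAX means discard is supported
fstrim -av         # or enable the distro's fstrim.timer for periodic trims
```

The ssd=1 flag just makes the guest see the disk as non-rotational; discard=on is the part that actually enables TRIM pass-through.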

On the PVE node itself, I seem to recall there being a reason this isn't the default in PVE, but I can't find it now. Something like: we don't need autotrim because something else is doing that in place of ZFS itself?

Should this be on or off?
EDIT: I knew I'd posted about this before. Leave it off, but there might be some additional tweaks needed.
See: https://forum.proxmox.com/threads/p...uilt-in-trim-cron-job-vs-zfs-autotrim.114943/
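For reference, this is roughly what a manual or scheduled trim looks like in place of autotrim (pool name and schedule are just examples):

```shell
# One-off manual trim of the pool (what a scheduled job effectively does):
zpool trim rpool
zpool status -t rpool    # shows per-vdev trim state and progress

# Illustrative cron entry for a monthly trim at 03:00 on the 1st:
# 0 3 1 * * root /sbin/zpool trim rpool
```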

Disable atime and relatime attributes.
This is to minimize writes, as otherwise these have to be updated on disk every time a file is accessed.

How much of an impact does this really have? Especially in a home network/home server environment? When would I regret not having atime enabled?
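For anyone following along, checking and changing this per dataset looks like the following (rpool/data is a placeholder dataset name):

```shell
# Check the current values:
zfs get atime,relatime rpool/data

# Disable access-time updates entirely (relatime is ignored when atime=off):
zfs set atime=off rpool/data
```

The classic case where you'd miss atime is software that uses it to detect activity, e.g. some mail clients checking mailbox files; for most home-server workloads it reportedly goes unnoticed.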

Change Extended Attribute Storage to Use inodes to Minimize Writes
Default,
Code:
xattr=on
, uses hidden subdirectories; may need multiple lookups when accessing a file.
Alternative,
Code:
xattr=sa
, stores the attributes in the file's inode (dnode) instead

dnodesize=auto apparently should also be set for non-root datasets when changing xattr this way.

a. Apparently, inodes are the intended way to store this data in ZFS on Linux (ZoL)?
b. Creates fewer I/O requests when extended attributes are in use, e.g. for SMB?
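Putting those two settings together, the change would look like this (tank/smbshare is a placeholder dataset):

```shell
# Store extended attributes as system attributes in the dnode:
zfs set xattr=sa tank/smbshare
zfs set dnodesize=auto tank/smbshare   # lets larger xattrs fit in the dnode

# Verify:
zfs get xattr,dnodesize tank/smbshare
```

Note that xattr=sa only affects newly written files, and the non-root-dataset caveat reportedly exists because the bootloader may not handle large dnodes on the boot pool.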

Correct ACL Type
Default: off
Allows use of POSIX ACLs (getfacl, setfacl).
These store additional access rights (per user and/or per group) on files and directories,
on top of the regular permissions set through chmod/chown.
Recommended setting:
Code:
acltype=posixacl

I don't really understand this one well enough to understand the implications of what it's doing.
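As far as I can tell, in practice it looks like this (tank/share and user alice are placeholders):

```shell
# Enable POSIX ACL support on a dataset:
zfs set acltype=posixacl tank/share

# After that, ACLs behave as they would on ext4/xfs:
setfacl -m u:alice:rwx /tank/share/project   # grant alice rwx on one dir
getfacl /tank/share/project                  # show the resulting ACL
```

The practical implication: with acltype=off, anything that tries to set ACLs (Samba shares with Windows-style permissions, systemd-journald, etc.) will fail; with posixacl, those per-user/per-group grants work alongside the normal mode bits.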
 
If you really want a deep dive, look at the upstream Open-ZFS documentation, e.g. this.

The best way to "optimize for SSD" is to NOT BUY non-enterprise SSDs. This cannot be said enough; most posts in this forum about ZFS and SSD performance and problems are about non-enterprise SSDs. I tried it out myself and it is TERRIBLE indeed. Even with Samsung Pros ... which are on the better end of non-enterprise SSDs. Just buy enterprise SSDs and don't worry about trimming or reduced life. I've been running Samsung SSDs for literally over a decade without regular trim and they just work fine, with uptimes in the double-digit years.

I bought over 30 used enterprise SSDs over the years and they're all still running fine with >95% health and many, many TBW.
 
I'm running enterprise SATA disks in this box. Both of them are showing 99 percent endurance remaining, and their endurance is measured in PB. :)

My other box will be using enterprise SAS2 disks for mass storage.

Endurance aside, are there measurable performance benefits to this sort of optimization?
 
Hi,

IMO, I agree 100% with @LnxBil. Also, enabling autotrim on any VM/PMX host is not so great, because for every block that is deleted a TRIM command is also sent, so it will impact performance (think of deleting a big file ...). It is much better to set up a schedule (daily, for example) that runs during the night, when your load is lower.

Good luck / Bafta!
 
Yeah, I have a trim cronjob, because it trims slog and special device, too. 2 crons, one for each pool.
 
The host is not so important, but the VMs are: mostly for freeing pool space occupied by deleted files.

Endurance aside, are there measurable performance benefits to this sort of optimization?
Depends on your workload, raidz settings, etc. There are no knobs you can simply turn to make everything run 10 times faster; if there were, they would already be turned by default and wouldn't be needed in the first place. Everything else falls into the "it depends" category: mostly analysing what you want and optimizing for that. There are application-specific optimizations, e.g. for PostgreSQL.
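A commonly cited example of such an application-specific tweak is matching the dataset's recordsize to PostgreSQL's 8 KiB page size (tank/pgdata is a placeholder; measure before and after, as the benefit depends heavily on workload):

```shell
# Match recordsize to PostgreSQL's page size for the data directory:
zfs set recordsize=8k tank/pgdata

# Only affects files written after the change:
zfs get recordsize tank/pgdata
```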
 