How to Reduce SSD Wearout in Proxmox?

verishare · New Member · Oct 14, 2025
Hi everyone,

I've been running Proxmox on a system with SSDs and recently noticed increased wearout on one of the disks. I'm a bit concerned about the longevity of the drive, and I’d like to optimize my setup to reduce unnecessary writes and prolong the SSD's lifespan.

I've attached a screenshot showing the current wear level for reference.

Here are some details about my setup:
  • Proxmox v9.0.10
  • SSD is being used for both OS and VM storage
  • 1 container & 3 active VMs running
  • No dedicated wear leveling or over-provisioning set up
  • TRIM is not confirmed to be active
I would appreciate any suggestions or best practices on how to reduce SSD wearout in a Proxmox environment. Specifically:
  • Are there any Proxmox-level or Linux-level tweaks I should consider?
  • Is enabling TRIM necessary, and how can I verify if it’s active?
  • Should I move certain data (e.g., logs, swap) to a different storage device?
  • Any recommended filesystem settings or caching strategies?

Thanks in advance for your help!
 

Attachments

  • proxmox-wearout.PNG (17.3 KB)
Hi, it looks like your SSD is handling both OS and VM I/O, which accelerates wear.

Check if TRIM is active (fstrim -v /), and enable discard on LVM/VM disks to free unused blocks.
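
For example, a minimal sketch of checking and enabling this (the VM ID, disk name, and storage name below are placeholders; adjust them to your own setup):

    # Manually trim the root filesystem; -v reports how much was trimmed
    fstrim -v /

    # Enable the weekly TRIM timer that ships with systemd
    systemctl enable --now fstrim.timer

    # Pass discard from a VM disk down to the SSD
    # (VM 100, scsi0 and local-lvm are examples, not your actual values)
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on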

Move logs, swap, and RRD data off the SSD or into tmpfs/ramdisk to cut small writes.
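
One low-effort way to do that for the system journal, assuming systemd's journald (the 64M cap is just an example value):

    # Keep the systemd journal in RAM instead of on the SSD
    # (journal contents are lost on reboot, which many homelabs can accept)
    mkdir -p /etc/systemd/journald.conf.d
    printf '[Journal]\nStorage=volatile\nRuntimeMaxUse=64M\n' > /etc/systemd/journald.conf.d/volatile.conf
    systemctl restart systemd-journald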

If using ZFS, tune sync writes and ARC size; for LVM, consider adding over-provisioning.
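
For the ARC, a sketch of capping it via a module option (4 GiB here is only an example; size it to your RAM and workload):

    # Limit the ZFS ARC to 4 GiB (value is in bytes)
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
    update-initramfs -u    # then reboot for the cap to take effect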

Separating system and VM storage will make the biggest difference long-term.

Hope this helps extend your SSD's lifespan a bit.
 
verishare said:
"on how to reduce SSD wearout"

You can decrease the rate of wearout by using a drive with power-loss protection (PLP): such drives can safely acknowledge writes from their cache, so data is actually written to the flash cells less frequently. Unfortunately the A400 series lacks this feature.

Additionally, PLP drastically increases the performance of synchronous writes.
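
Whatever drive you end up with, it's worth tracking wear over time. A quick check with smartmontools (attribute names vary by vendor, so the grep pattern is only a starting point):

    # Show the drive's own wear/endurance counters
    smartctl -a /dev/sda | grep -iE 'wear|percent|written'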
 
Thanks for all the suggestions so far. I also have an additional piece of hardware I want to include in my setup, and I’m hoping to get advice on how best to integrate it with minimal SSD wear.

I have an empty 1 TB WD Blue HDD (SATA) available, and I would like to use it in my Proxmox setup. My goal is to offload “noisy” (in terms of write cycles) workloads from the SSD as much as possible.

These are my homelab specs now:
CPU: Intel Core i5-3570 CPU @ 3.40GHz (1 Socket)
RAM: 32GB DDR3 (Max)
Storage: 1 x Kingston 480GB SSD (SA400S37/480G) & 1 x WD Blue 1TB HDD (7200 rpm)

Question: Which file system should I use on the HDD?
I’m unsure whether I should format the HDD using ZFS, or use something simpler like ext4 or xfs.
  • Would ZFS be too heavy for a single 1TB spinning HDD with no redundancy?
  • Are there advantages (like compression, snapshots) that still make ZFS worthwhile in this setup?
  • Or is ext4/xfs more appropriate here due to lower overhead and simplicity?
If someone more experienced with Proxmox + hybrid SSD/HDD setups can also advise on:
  • How to structure storage in this kind of hybrid setup
  • Whether mixing SSD and HDD in LVM or ZFS pools is advisable
  • Any useful Proxmox or Linux tuning parameters to further reduce SSD wear
Thanks again, looking forward to your suggestions. Any advice or experience would be appreciated!

- New Proxmox User
 
Here are just some of my own thoughts for your reference.

I would stick with ext4 on the WD Blue HDD; ZFS adds extra RAM and CPU load with little gain on a single disk. Keep the SSD for the OS and active VMs, and use the HDD for logs, swap, backups, or low-priority containers (a sketch follows below).
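
As a rough sketch of wiring the HDD in as a separate Proxmox storage (the device name, mount point, and storage ID are placeholders; double-check with lsblk before formatting):

    # Format the HDD and register it as a directory storage
    mkfs.ext4 /dev/sdb
    mkdir -p /mnt/hdd
    mount -o noatime /dev/sdb /mnt/hdd
    pvesm add dir hdd-store --path /mnt/hdd --content backup,iso,rootdir,images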

Don't mix the SSD and HDD in one ZFS or LVM pool; keep them separate so the slow disk doesn't bottleneck the fast one and your writes land where you intend. Mount HDD partitions with noatime and data=writeback to minimize extra writes.
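
To make that mount persistent with those options, an /etc/fstab line might look like this (the UUID is a placeholder; note that data=writeback trades some crash consistency for fewer journal writes):

    # Get the real UUID with: blkid /dev/sdb
    UUID=xxxx-xxxx  /mnt/hdd  ext4  noatime,data=writeback  0  2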

This layout gives you (in my opinion) the best balance of performance, simplicity, and SSD longevity.

As I've only run enterprise-critical workloads, I always use enterprise-grade SSDs and NVMe drives. They are more costly, but they can run for a long time: we've got SSDs that have been running for more than 5 years now with just 10% wear.