SSD wear with ZFS compared to BTRFS

ptersilie

Jan 19, 2026
Hi folks,

I'm looking to use Proxmox on my home server and I'm struggling to decide whether to use ZFS, btrfs, or something else entirely. My home server is mostly used to back up photos (immich) and run a few services (jellyfin, paperless, home assistant, etc).

I thought I had made up my mind to use a ZFS RAID1 mirror but then I started reading about SSD wear which now has me worried. I have two 4TB consumer drives:

- Crucial P3 Plus SSD 4TB M.2 NVMe
- Samsung SSD 870 EVO 4TB

I was planning to run the Proxmox OS on the mirror as well to decrease downtime should a drive fail. But I was reading that Proxmox is quite write-heavy (logs, etc.), which is then amplified by ZFS. I now know that enterprise SSDs are recommended for this, but I have to work with what I've got for now. My server also only has 2 drive slots, so I can't run Proxmox off a separate disk either, unfortunately.

Am I worrying too much? Are there things I can do to mitigate SSD wear? Is BTRFS a good alternative (I know it's still experimental but RAID1 appears to be stable)? Any suggestions are very appreciated! Thanks!
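In case it helps: this is a rough sketch of how I plan to keep an eye on wear in the meantime (assuming smartmontools is installed; the device paths are placeholders for my two drives):

# NVMe (the P3 Plus) reports wear directly in its SMART log:
smartctl -a /dev/nvme0n1 | grep -i -e "percentage used" -e "data units written"
# SATA (the 870 EVO) exposes total writes as a vendor attribute instead:
smartctl -A /dev/sda | grep -i total_lbas_written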
 
Rather than reaching the TBW rating, the slower synchronous I/O and the resulting lack of performance are the greater concern.

Thanks for your reply! I wasn't really too worried about I/O performance, as I imagined the bottleneck when streaming photos, documents, and videos is more the Wi-Fi connection than the drives.

Wouldn't LVM be an option?

Before you mentioned it, I didn't realise you can do RAID1 with LVM as well, so I will definitely look into it. What features would I be missing out on compared to ZFS and BTRFS?
 
Thanks. So does RAID1 on LVM always use mdraid, which the link above says is unsupported, or is there another supported way to do LVM RAID1?

Also, do you guys have thoughts on BTRFS? It seems like the right choice for me: write amplification isn't as big as on ZFS and it's generally a bit easier on the requirements, yet it still gives me error correction, snapshots, etc. Although one thing I kept reading when looking into BTRFS is that if a disk dies you can't boot into the Proxmox OS anymore, which kind of defeats the purpose of RAID1. Though these were old threads and I'm not sure if this is still accurate.
 
Depending on how much data loss is acceptable to you, you can also increase the timeout for TXG writes.
The default is 5 seconds.
E.g. to increase it to 10 seconds, add options zfs zfs_txg_timeout=10 to /etc/modprobe.d/zfs.conf.
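A minimal sketch of what that could look like on a stock Proxmox (Debian) install; the value of 10 is just an example:

# persist the setting across reboots
echo "options zfs zfs_txg_timeout=10" >> /etc/modprobe.d/zfs.conf
update-initramfs -u    # module options are read from the initramfs at boot
# apply it immediately without rebooting
echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout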
 
Depending on how much data loss is acceptable to you, you can also increase the timeout for TXG writes.
Given that I currently do a manual rsync to another drive, anything would be better than that. :D I don't really write a lot of data, and photos/documents are manually backed up to Backblaze every so often. I don't think I have data that needs to be synced immediately. Even 10 seconds might be overkill for my use case.

I won't fight this. This discussion is interesting, though I did not verify their statements: https://github.com/kdave/btrfs-progs/issues/760

Hmm, I've read that btrfs has an SSD mode to reduce wear (hence my uneducated statement above that it's better for SSD wear than ZFS), but this thread seems to contradict that. There's a lot to process here.
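For reference, the mount options I keep seeing suggested for btrfs on SSDs look roughly like this (an example /etc/fstab line; the UUID is a placeholder, and whether these actually reduce write volume seems to be exactly what that issue debates):

# example fstab entry; UUID is a placeholder for the real filesystem UUID
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/data  btrfs  defaults,ssd,noatime,compress=zstd,discard=async  0 0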
 