Filesystem choice for consumer SSD

jlbbb

Member
Aug 10, 2023
Hello everyone,

I am in the process of building a Proxmox node. It will be a single node: no RAID, no HA, just backups to a PBS.

Sadly I am forced to use consumer SSDs, which I already bought: Crucial MX500 1 TB drives. I am trying to extend their lifespan as much as I can.

I was considering a zvol or lvm-thin for the block devices. Could anyone help me estimate which one would be more forgiving on my SSDs?

The zvol would be tuned: atime disabled, sync behaviour adjusted, a longer txg timeout, and a proper ashift (I'd say 12?). I have no idea how to tune lvm-thin. The VMs will run XFS, possibly properly tuned as well.
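For context, the tuning knobs listed above roughly correspond to the following commands. This is a sketch only: the pool name (`tank`) and device are placeholders, and whether to touch `sync` at all is a judgement call, not something the thread settles.

```shell
# ashift=12 (4 KiB physical sectors) can only be set at pool creation:
zpool create -o ashift=12 tank /dev/sda

# Disable access-time updates to avoid metadata writes on every read:
zfs set atime=off tank

# Check the current sync policy; sync=disabled would cut writes further
# but risks losing the last few seconds of data on power loss:
zfs get sync tank

# Lengthen the txg commit interval (default 5 s) so more writes are
# batched per flush; persisted as a ZFS module parameter:
echo 'options zfs zfs_txg_timeout=30' >> /etc/modprobe.d/zfs.conf
# Or apply at runtime without a reboot:
echo 30 > /sys/module/zfs/parameters/zfs_txg_timeout
```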

Another option would be raw files, but that would stack XFS + raw + XFS, and I am not sure it would be beneficial.

The TL;DR is basically: on crappy, ill-advised hardware, would you rather use LVM-Thin or a decently tuned zvol to extend the SSDs' lifespan?

Thanks in advance!
 
While I am definitely a ZFS fan there might be situations where LVM (-thin) is the recommended choice. Sadly this might be that rare occasion...
 
I would plan on replacing the consumer-level drives long-term. Definitely keep an eye on the wearout indicator.

For speed you could try lvm-thin (allows snapshots) or XFS (faster than ext4) - and if you need snapshots on the latter, back up to PBS and take advantage of its dedup.
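For reference, lvm-thin snapshots look like this on the command line. The volume group and LV names below (`pve`, `data`, `vm-disk`) are illustrative; on a stock PVE install the storage layer issues these calls for you.

```shell
# Create a 100 GiB thin pool inside volume group "pve":
lvcreate -L 100G -T pve/data

# Create a thin volume with a 32 GiB virtual size in that pool:
lvcreate -V 32G -T pve/data -n vm-disk

# Snapshot the thin volume; thin snapshots need no preallocated size:
lvcreate -s pve/vm-disk -n vm-disk-snap
```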
 
While I am definitely a ZFS fan there might be situations where LVM (-thin) is the recommended choice. Sadly this might be that rare occasion...
The only reason coming to my mind would be an absurdly low amount of RAM. Anything else?
 
The only reason coming to my mind would be an absurdly low amount of RAM. Anything else?
Well, the usual reasons against ZFS are a) higher wearout - the SSD won't last as long as with LVM/ext4 - and b) slower performance because of the extra write activity. (And consumer drives have no PLP, which would allow the drive to safely defer the actual write...)

The amount of RAM required for ZFS is not as high as was very often stated in the past. In particular, the old rule of "1 GB of RAM per 1 TB of disk space" is bogus. But I have not tested ZFS on a Raspberry Pi with just 1 GiB of RAM yet ;-) (Also: https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage)
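The wiki page linked above boils down to capping the ARC via a module parameter. A minimal sketch, assuming a 2 GiB cap (the value is in bytes and is your choice, not a recommendation from this thread):

```shell
# Persist an ARC cap of 2 GiB across reboots:
echo 'options zfs zfs_arc_max=2147483648' > /etc/modprobe.d/zfs-arc.conf
# Refresh the initramfs so the limit applies at early boot:
update-initramfs -u

# Or apply immediately at runtime without a reboot:
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
```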

Of course I work hard to have the goodies of ZFS wherever I can: https://forum.proxmox.com/threads/f...y-a-few-disks-should-i-use-zfs-at-all.160037/
 
While I am definitely a ZFS fan there might be situations where LVM (-thin) is the recommended choice. Sadly this might be that rare occasion...
Hello and thanks for your reply. Is there a real advantage, though, over a tuned zvol? I have read that WAF (write amplification factor) can be severely reduced by turning a few knobs in the zvol config - is that true? It mostly comes from (I suspect) AI-generated content, and I have no first-hand experience with this...

I would plan on replacing the consumer-level drives long-term. Definitely keep an eye on the wearout indicator.

For speed you could try lvm-thin (allows snapshots) or XFS (faster than ext4) - and if you need snapshots on the latter, back up to PBS and take advantage of its dedup.
Hello and thanks for your reply, I care about SSD endurance more than speed, speed is not really an issue.
 
I often hear that ZFS/PVE is killing SSDs, but as of yet I could not verify this myself, and no one got back to me with concerning iotop-c values either.
I'd recommend ZFS too. You can check the amount of writes like this: https://gist.github.com/Impact123/3dbd7e0ddaf47c5539708a9cbcaab9e3#io-debugging
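Alongside the gist linked above, the drive's own SMART counters give a cumulative picture. A small sketch (the device path and helper name are mine, not from the thread) that converts the MX500's Total_LBAs_Written attribute into TiB written:

```shell
# Convert a Total_LBAs_Written count (512-byte sectors) into TiB:
lbas_to_tib() {
  LC_ALL=C awk -v lbas="$1" 'BEGIN { printf "%.2f\n", lbas * 512 / 2^40 }'
}

# On a live system you would feed it the real SMART value, e.g.:
#   smartctl -A /dev/sda | awk '/Total_LBAs_Written/ {print $10}'
lbas_to_tib 4294967296   # prints 2.00
```

Sampling this periodically and diffing the values gives a writes-per-day figure you can compare against the drive's rated TBW.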
Hello and thanks for your reply.

I hear that too. One person on YouTube apparently debunked it (for reference: https://www.youtube.com/watch?v=V7V3kmJDHTA ), but the common knowledge remains: ZFS kills SSDs... I can't quite explain this; hard data seems to show it doesn't, at least not as much as one would think. Do you happen to use it on consumer hardware? Do you have any first-hand experience yourself? That would be very valuable.
 
I have a few different nodes for different things. On my main cluster (and whenever possible) I use DC drives. I listed the DC drives I use here if you're curious.
On my NAS/PBS node I use a 256GB Samsung PM981 consumer NVMe. I'm also in the process of testing a 256GB Western Digital SN530 consumer NVMe at the moment, due to a discussion with someone about this very thing.
I haven't gotten around to configuring a lot of services for it yet, though, so this data isn't very valuable. Right now it just runs an idle HAOS VM and a Debian VM. This has been accumulating for 30h or so.
[screenshot of write statistics attached] Overall my services aren't very write-heavy on the boot disk, but I have had no real issues as far as the amount of writes/health is concerned. Like I said, I'd recommend you test yourself with your workload.
 
Last edited: