SSD Wear

troycarpenter

Renowned Member
Feb 28, 2012
Central Texas
I have six nodes in my Proxmox cluster that are exclusively Ceph storage nodes (no VMs). Each node has a pair of Samsung 860 Pro 256 GB SATA SSDs with the OS installed on them as a ZFS mirror. These have been in operation for about 5 years. I have noticed the SSD wearout indicator for both drives on all nodes ranges from 60 to 70%.

Incidentally, my compute nodes, where the OS is installed on a single Intel 480 GB SATA SSD (SSDSC2KB480G8) along with some VMs on local-lvm, all sit anywhere from 1-2% wearout.

Why are the Ceph nodes' SSDs being hit so hard, and should I worry that the wearout is so high? Is there something inherent about the mirrored ZFS boot-drive configuration that's hitting these drives so hard? Is this normal or abby-normal? For now I'll just keep an eye on them and plan replacements when necessary.
 
Do you have atime=off set everywhere on ZFS? Unfortunately, if you're running a cluster, there's a reason Proxmox recommends enterprise-level SSDs...

`zfs get atime`
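To audit the setting and, if you want, turn it off pool-wide, something like the following should work. Note `rpool` is just the default Proxmox VE pool name; substitute your own pool name if it differs:

```shell
# List the atime setting for every dataset in the pool
zfs get -r atime rpool

# Disable atime pool-wide; child datasets inherit the change
# unless they have overridden it locally
zfs set atime=off rpool
```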
 
You are comparing apples to oranges: the Intel drive is an enterprise-grade disk rated at 2 DWPD, while the Samsung ones are consumer grade (and low end endurance-wise: 300 TBW over 5 years). Still, they've lasted for 5 years without data loss. Simply replace them with enterprise-grade drives and keep enjoying PVE+Ceph.
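To put those ratings on a common scale, endurance can be expressed as DWPD (full drive writes per day sustained over the warranty period). A quick back-of-envelope sketch; the 300 TBW / 5-year figures are the Samsung rating quoted above, and the Intel numbers assume the 2 DWPD claim:

```python
# DWPD: how many full-capacity writes per day a drive's TBW rating
# sustains over its warranty period.
def dwpd(tbw: float, capacity_gb: float, warranty_years: float = 5.0) -> float:
    days = warranty_years * 365
    return tbw * 1000 / (capacity_gb * days)  # TBW in TB, capacity in GB

# Samsung 860 Pro 256 GB: rated 300 TBW over a 5-year warranty
samsung_dwpd = dwpd(300, 256)            # roughly 0.64 DWPD

# Intel SSDSC2KB480G8 (480 GB), assuming the 2 DWPD figure above
intel_tbw = 2 * 480 * 5 * 365 / 1000     # roughly 1750 TBW
```

So even by its own rating the consumer drive is expected to absorb only about a third of a drive-write per day of what the enterprise disk is warrantied for.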
 
troycarpenter said:
> Each node has a pair of Samsung 860 Pro 256 GB SATA SSDs with the OS installed on them as a ZFS mirror. These have been in operation for about 5 years. I have noticed the SSD wearout indicator for both drives on all nodes ranges from 60 to 70%.
>
> Incidentally, my compute nodes, where the OS is installed on a single Intel 480 GB SATA SSD (SSDSC2KB480G8) along with some VMs on local-lvm, all sit anywhere from 1-2% wearout.
That's the difference between a consumer SSD and an enterprise SSD (with PLP and/or much higher write endurance). This is normal; Proxmox is known to destroy most kinds of consumer SSDs due to its constant logging, RRD graph updates, and ZFS metadata writes.
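Plugging the numbers from this thread into a linear projection gives a feel for the write load (assuming the wearout counter scales with the 300 TBW rating and the write load stays constant; both are simplifications):

```python
# Linear back-of-envelope projection from the figures in this thread.
rated_tbw = 300.0        # Samsung 860 Pro 256 GB endurance rating, in TB
wearout_pct = 65.0       # mid-range of the 60-70% reported above
years_in_service = 5.0

tb_written = rated_tbw * wearout_pct / 100                  # ~195 TB
gb_per_day = tb_written * 1000 / (years_in_service * 365)   # ~107 GB/day
years_to_100pct = years_in_service * 100 / wearout_pct      # ~7.7 years total
```

Roughly 100 GB/day of writes against a 256 GB boot mirror is a lot for drives that host no VMs, which is consistent with the logging/graphing overhead described above, and it suggests these drives reach 100% of their rated endurance within a couple more years.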
 