I have 6 nodes in my Proxmox cluster which are exclusively Ceph storage nodes (no VMs). Each node has a pair of Samsung 860 Pro 256GB SATA SSDs with the OS installed on them as a mirrored ZFS pool. These have been in operation for about 5 years. I have noticed the SSD wearout indicator for both drives on all nodes ranges from 60 to 70%.
By comparison, my compute nodes, where the OS is installed on a single Intel 480GB SATA SSD (SSDSC2KB480G8) alongside some VMs on local-lvm, all sit at just 1-2% wearout.
Why are the Ceph nodes' SSDs being written to so heavily, and should I be worried that the wearout is this high? Is there something inherent about the ZFS mirrored boot drive configuration that hammers these drives? Is this normal or abby-normal? For now I plan to just keep an eye on them and schedule replacements as necessary.
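In case it helps frame the question, here's the rough sketch I've been using to compare actual write volume between the two sets of boot drives, based on SMART data. This assumes the drives report Total_LBAs_Written as attribute 241 with 512-byte LBAs (which my 860 Pros appear to do); the device paths are just examples and smartmontools needs to be installed.

```python
#!/usr/bin/env python3
"""Rough estimate of total bytes written per SSD from SMART data.

Assumes the drive exposes Total_LBAs_Written as SMART attribute 241
and that one LBA = 512 bytes. Device paths are examples only.
"""
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # the two ZFS boot mirror members
LBA_SIZE = 512                      # bytes per LBA on these drives


def total_bytes_written(dev: str) -> int:
    """Return total bytes written reported by the drive, or raise if absent."""
    out = subprocess.run(
        ["smartctl", "-A", dev],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        # SMART attribute table rows start with the attribute ID
        if fields and fields[0] == "241" and "Total_LBAs_Written" in line:
            return int(fields[-1]) * LBA_SIZE
    raise RuntimeError(f"Total_LBAs_Written not found on {dev}")


if __name__ == "__main__":
    for dev in DEVICES:
        tb = total_bytes_written(dev) / 1e12
        print(f"{dev}: ~{tb:.1f} TB written")
```

Dividing the reported total by the drives' age in days gives a daily write rate, which makes it easier to compare the Ceph nodes against the compute nodes and to tell whether the wear is steady background writes or something bursty.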