[SOLVED] ceph disk wearout

RobFantini

I have 18 Ceph OSDs; all show 0% wearout on the Disks screen of the three PVE hosts.

Question: Is that normal?



PVE Manager Version : pve-manager/4.4-15/7599e35a
 
This Ceph system has been in operation for around 5 months. Ceph is used mainly for accounting/business and ownCloud applications. ownCloud gets at most 10MB of changes per day; accounting sees 500MB on a huge day, normally 100MB of changes to files.

The SSDs are Intel SSD DC S3520, 18 of them, 480GB each.
 
Looks normal - but you can check the SMART values to see the amount of total written data ("Total_LBAs_Written"; multiply by 32 and you have the total megabytes).

Compare this with the values in the Intel datasheet (see TBW):
https://ark.intel.com/products/93026/Intel-SSD-DC-S3520-Series-480GB-2_5in-SATA-6Gbs-3D1-MLC
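
In case it helps, here is a rough sketch (just an illustration, not an official tool) of how you could pull that number per disk with smartmontools and Python. The 32 MB-per-unit factor and the 945 TBW rating come from this thread; the device path is an example you would adjust.

Code:
#!/usr/bin/env python3
# Rough sketch: estimate total data written from SMART, assuming the drive
# reports Total_LBAs_Written in units of 32 MB as discussed above.
# The device path and TBW rating below are examples - adjust for your setup.
import subprocess

def total_written_tb(device):
    """Estimate total data written to `device` in TB (decimal)."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Total_LBAs_Written" in line:
            raw = int(line.split()[-1])      # RAW_VALUE column
            return raw * 32 / 1_000_000      # units of 32 MB -> TB
    raise ValueError("Total_LBAs_Written not found on " + device)

if __name__ == "__main__":
    rated_tbw = 945                          # S3520 480GB endurance from the datasheet
    written = total_written_tb("/dev/sda")   # adjust to your OSD device
    print(f"{written:.2f} TB written, {written / rated_tbw * 100:.2f}% of rated endurance")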

FYI, we have been running a small 3-node setup with 3 x 4 OSDs for half a year on Intel SSD DC S3520 1.2 TB drives, and we are also at 0% (0.3% to be exact). We have about 30 VMs and a few containers with a standard office workload.
 
OK, thanks for that.

So I checked a couple of drives and Total_LBAs_Written is about 250,000.

At the Intel site: Endurance Rating (Lifetime Writes) = 945 TBW. That is 967,680 gigabytes.

I am a little confused on how to do the math.

Do I divide 250,000 by 32? I assume so <<< Question

So 250,000 / 32 = 7,813 MB <<

Or is it 250,000 x 32?
 
250,000 * 32 = 8,000,000 MB (8 TB)

This is below 1% of 945 TB.
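
For anyone following along, the same arithmetic as a tiny Python snippet (values taken from the posts above; the exact raw value will differ per drive):

Code:
# Arithmetic from the posts above, using the reported raw value of 250,000.
raw_lbas_written = 250_000            # SMART Total_LBAs_Written raw value
written_mb = raw_lbas_written * 32    # 32 MB per unit -> 8,000,000 MB
written_tb = written_mb / 1_000_000   # ~8 TB
rated_tbw = 945                       # datasheet endurance rating, in TB
print(f"{written_tb:.1f} TB written = {written_tb / rated_tbw * 100:.2f}% of 945 TBW")
# prints: 8.0 TB written = 0.85% of 945 TBW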