wearout indicator

gijsbert

Active Member
Oct 13, 2008
47
3
28
We have been running Proxmox on Intel SSDs for over a year now. Although there is not very much disk activity, the wearout indicator is still 0%, and I'm wondering whether this is correct. Attached are two screenshots of what we see in the Proxmox GUI. Can anyone confirm that the wearout indicator is indeed 0% with Intel SSDs, or does Intel use a different SMART ID for the wearout indicator?

Thanks in advance for any reply,

Gijsbert

Schermafbeelding 2018-09-28 om 12.35.53.png Schermafbeelding 2018-09-28 om 12.38.56.png
 
We get the information directly from the SSD via smartctl,
so unless Intel changed the attribute number for their SSDs, the 'Media Wearout Indicator' shows 0% wearout.
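For reference, here is a minimal sketch of how such a percentage can be derived from `smartctl -A` output. It assumes an Intel drive exposing SMART attribute 233 (Media_Wearout_Indicator), whose normalized value starts at 100 and counts down toward 0, so wearout is 100 minus that value; the sample line below is fabricated for illustration:

```python
def wearout_from_smartctl(output: str):
    """Parse `smartctl -A` output and return wearout as a percentage.

    Assumption: Intel SSDs expose SMART attribute 233
    (Media_Wearout_Indicator), a normalized value starting at 100
    and counting down; wearout is therefore 100 minus that value.
    """
    for line in output.splitlines():
        fields = line.split()
        # Attribute table rows look like:
        # ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if fields and fields[0] == "233":
            normalized = int(fields[3])
            return 100 - normalized
    return None  # attribute not found

# Fabricated example line for a drive at 11% wearout:
sample = "233 Media_Wearout_Indicator 0x0032   089   089   000    Old_age   Always       -       0"
print(wearout_from_smartctl(sample))  # 100 - 89 = 11
```

If this returns 0, the normalized value is still at 100, i.e. the drive genuinely reports no wear yet.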
 
In one of our clusters we also use Intel DC 3520 drives. I also thought there was an issue, but now I see 1% wear-out on one of my OSDs. See screenshot.

2018-Intel-DC-3520-wearout.png
 
Just to add: this Ceph cluster has been running for about a year, but we do not write that much to this one.
 
Hi. I've just noticed that the wearout indicator shows incorrect values. When smartctl showed a 10% indicator, PVE showed 0%. Now it's 1% while the real value is 11%. See attached pic.
PVE Manager Version is 5.4-15/d0ec33c6

Снимок.JPG
 
Now it's 1% while the real value is 11%. See attached pic.
PVE Manager Version is 5.4-15/d0ec33c6
This has been fixed since PVE 6.1, so I'd recommend upgrading to a current (supported) PVE version.
 
Just an off-topic reply...
Since moving from ESXi to Proxmox, these wear indicators are barely moving for me. I had these NVMe drives in a mirror in an ESXi VM (via passthrough), presented back to the host. When moving to Proxmox, I just re-imported the pool and started using them. In a year, my drives went from 0% to roughly 20% wear with hardly any use.

Now, after 2 months of Proxmox use, they've only gone up 1%. Extrapolating that to 1 year of Proxmox use gives 6% (at worst). With ESXi and the same loads, I was somehow at 20%. That means the lifetime of my NVMe drives has been extended by at least a factor of 3 :)
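The extrapolation above works out like this (the 1%-in-2-months and 20%-per-year figures are from my own drives; your numbers will differ):

```python
# Observed on Proxmox: 1% wearout over 2 months, scaled to a yearly rate.
proxmox_rate = 1 / 2 * 12        # percent per year
esxi_rate = 20                   # percent per year under the same load on ESXi
print(proxmox_rate)              # 6.0
print(esxi_rate / proxmox_rate)  # ~3.3x longer projected drive lifetime
```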

As I log all this SMART data to Grafana, I can quickly see trends. Once I moved from ESXi to Proxmox, I noticed that the temperature of my NVMe drives was elevated for the first couple of weeks, in several "batches" it seems. I suppose this was internal garbage collection in the firmware that was somehow unable to run under ESXi, as ZFS stats didn't show any I/O; but temperature doesn't lie ;) After those days, everything seems to have normalized.

Bottom line: Hurray for Proxmox :)
 
