wearout disk ssd

  1. R

    What is the impact of PVE hard disk wearout 100%?

    What is the impact of a PVE hard disk reaching 100% wearout? How is the wearout value calculated, and is there any basis for it?
  2. P

    SSD wear

    Hi, I've been reading about others facing a similar issue, but I wanted to share mine and see if there is any solution to it. I could not find a solution that I could understand or implement so far... Sorry if it's obvious, but please help! I've been running Proxmox for a while now. In Aug 2022 I bought two...
  3. K

    High Data Units Written / SSD wearout in Proxmox

    Hi everyone, Happy new year :) I have begun to see a disturbing trend in both my Proxmox VE nodes: the M.2 disks are wearing out rather fast. Both nodes are identical in terms of hardware and configuration. 6.2.16-12-pve 2 x Samsung SSD 980 Pro 2TB (Only one in use on each node for...
  4. J

    [SOLVED] Really High SSD Wearout - Samsung 990 PRO

    Hello everyone, Some months ago I set up an Intel NUC 13 (i5 13th gen, 64 GB) as a homelab to run 9 little VMs. The SSD I mounted is a 2TB Samsung 990 PRO. Space used on the SSD is 107 GB out of 1.84TB, and the VMs are 8 Debian and 1 Ubuntu doing really little things (web servers which are used...
  5. U

    Minimize write activity on the ZFS rpool

    Hello everyone, we have a server with 2x 32GB SATA DOMs running a ZFS mirror as the pool rpool. It was installed 1.5 years ago by the Proxmox 7 ISO installer. The DOMs are SSD-DM032-SMCMVN1 units, which SuperMicro rates at 1 DWPD. Now smartd has...
  6. P

    Rapid SSD wear-out ZFS RAID1

    Good evening, I have a question about my Proxmox setup. I have a server with 128GB RAM and 2x 2TB SSDs configured in RAID1 (ZFS). I saw that the wearout is around 10% after 34 days. I was researching on the internet and found a possible solution: adjusting ZFS settings like recordsize to...
  7. H

    Proxmox on ZFS - Should I be worried?

    Hi, I self-host Proxmox on a dedicated server, running on 2 SSDs in a ZFS mirror plus 2 hard drives in an independent pool. The SMART results on the SSDs are starting to worry me a bit, and I'm thinking about ditching ZFS. A quick recap: it seems to me the "Power_On_Hours" value is incorrect. This server has...
  8. Y

    SSD Wearout negative %

    Hello, what does it mean when I see a negative percentage in the Wearout field? (attached)
  9. ThinkAgain

    SSD Wear Out Calc?

    Not sure if this is the best forum, but I will give it a try: I have two 2TB Samsung 860 Pro SSDs here. Both show approximately the same SMART status: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 5 Reallocated_Sector_Ct 0x0033 100 100 010...
  10. TwiX

    CEPH : SSD wearout

    Hi, my 'oldest' Proxmox Ceph cluster is based on Samsung SM863a drives. After 3 years, the wearout for some drives shows less than "88% remaining". I don't know if these values are safe enough. Below what wearout value is it recommended to replace an SSD drive? Thanks!
  11. G

    wearout indicator

    We have been running Proxmox on Intel SSDs for over a year now. Although there is not very much disk activity, the wearout indicator is still 0% even after about a year of use, and I'm wondering whether this is correct. Attached are two screenshots of what we see in the Proxmox GUI. Can...
  12. G

    Reading SSD wearout indicator

    First of all I'd like to say: Proxmox rocks :) We are using Seagate XF1230-1A0960 SSDs. In the Proxmox GUI I see that the wearout indicator status is N/A (see attached image). However, when I check the SMART status I see attribute 177 (Wear Leveling Count), and these have the following...
  13. F

    ssd disk wearout

    Hello, what does "Wearout" mean exactly? On this server it went from 45% to 49% in three weeks... Do the disks have a problem?
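
Several of the threads above (1, 8, 9, 12) come down to the same question: how the Wearout percentage in the GUI relates to the raw SMART data. Below is a minimal sketch of one common convention, assuming a SATA SSD exposes a normalized wear attribute (such as 177 Wear_Leveling_Count or 233 Media_Wearout_Indicator) that counts down from 100, and that wearout is reported as 100 minus that normalized value. The attribute IDs and the formula here are assumptions for illustration, not Proxmox's confirmed implementation.

```python
# Sketch: deriving a wearout percentage from normalized SMART values.
# The attribute IDs below are assumptions; vendors use different IDs.
WEAR_ATTRIBUTE_IDS = {
    177: "Wear_Leveling_Count",      # used by e.g. Samsung drives
    231: "SSD_Life_Left",
    233: "Media_Wearout_Indicator",  # used by e.g. Intel drives
}

def wearout_percent(attributes):
    """attributes: {SMART attribute ID: normalized VALUE}.

    Returns the estimated wearout in percent (100 = fully worn), or
    None when no known wear attribute is present, which a GUI would
    typically render as N/A (compare thread 12 above).
    """
    for attr_id in sorted(WEAR_ATTRIBUTE_IDS):
        if attr_id in attributes:
            # A vendor reporting a normalized value above 100 would
            # produce a negative result here, one possible explanation
            # for the negative percentage seen in thread 8.
            return 100 - attributes[attr_id]
    return None

# Attribute 177 present with normalized value 94 -> 6% worn
print(wearout_percent({5: 100, 177: 94}))  # 6
# No recognized wear attribute -> None (shown as N/A)
print(wearout_percent({5: 100, 9: 99}))    # None
```

NVMe drives differ: their SMART/Health log reports a "Percentage Used" field directly, so no subtraction from a normalized value is needed there.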