wearout

  1. What is the impact of PVE hard disk wearout 100%?

    What is the impact of PVE hard disk wearout 100%? How is the wearout value calculated and is there any basis for it?
  2. SSD wear

    Hi, I've been reading about others facing a similar issue, but I wanted to share mine and see if there is any solution to it. I could not find a solution that I could understand or implement so far... Sorry if it's obvious, but please help! I have run Proxmox for a while now. In Aug 2022 I bought two...
  3. High data units written / SSD wearout in Proxmox

    Hi everyone, Happy new year :) I have begun to see a disturbing trend in both my Proxmox VE nodes: the M2 disks are wearing out rather fast. Both nodes are identical in terms of hardware and configuration. 6.2.16-12-pve, 2 x Samsung SSD 980 Pro 2TB (only one in use on each node for...
  4. [SOLVED] Really High SSD Wearout - Samsung 990 PRO

    Hello everyone, some months ago I set up an Intel NUC 13 (i5 13th gen, 64 GB) as a homelab to run 9 little VMs. The SSD I mounted is a 2TB Samsung 990 PRO. Space used on the SSD is 107 GB out of 1.84TB, and the VMs are 8 Debian and 1 Ubuntu doing really little things (web servers which are used...
  5. Rapid SSD wear-out ZFS RAID1

    Good evening, I have a question about my Proxmox setup. I have a 128GB RAM server with 2x 2TB SSDs in RAID1 (ZFS) configured. I saw that the wearout is around 10% in 34 days. I was researching on the internet and found a possible solution: adjusting ZFS settings like recordsize to...
  6. Add another drive to VMdata

    I am running a small server with only 2 drives: 1x 60GB for the Proxmox installation and 1x 500GB for all the VM data. The 500GB disk is reporting 81% wearout, and I have begun to experience issues the last couple of days, which I suspect have to do with this wearout. Please correct me if that is...
  7. Disk overview: Wearout percentage shows 0%, IPMI shows 17% ...

    Hi, we are running an older Proxmox Ceph cluster here and I am currently looking through the disks. The OS disks have a wearout of 2%, but the Ceph OSDs still show 0%?! So I looked into the Lenovo XClarity Controller: for the OS disks it looks the same, but the Ceph...
  8. Excessive writes to NVMe on ZFS

    Hi guys, I'm running Proxmox 6.4.13 and recently installed a Corsair MP600 1TB NVMe using a PCIe riser card. The NVMe is set up using ZFS (single disk, compression on, ashift 12). I am seeing a concerning amount of writes, and I do not know why. I am not running any serious workloads. Just...
  9. Incorrect NVMe SSD wearout displayed by Proxmox 6

    I have recently installed four NVMe SSDs in a Proxmox 6 server as a RAIDZ array, only to discover that, according to the web interface, two of the drives exhibit huge wearout after only a few weeks of use. Since these are among the highest-endurance consumer SSDs, with 1665 TBW warranty for a...
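A recurring question across these threads is where the wearout number comes from. Proxmox VE reads it from the drive's SMART data via smartctl: for NVMe drives this appears to be the "Percentage Used" field, and for SATA SSDs a vendor wear-level attribute; the figure can also be sanity-checked against the drive's rated endurance (TBW). A minimal Python sketch of that arithmetic, where the function names and example numbers are illustrative assumptions, not Proxmox source code:

```python
# A minimal sketch, assuming SMART values copied by hand from
# `smartctl -a /dev/nvme0` output. Names and figures are illustrative.

def wear_from_percentage_used(percentage_used: int) -> int:
    """NVMe SMART 'Percentage Used' is already a 0-100+ wear estimate;
    a UI can display it directly (capped at 100)."""
    return min(percentage_used, 100)

def wear_from_tbw(data_units_written: int, rated_tbw_tb: float) -> float:
    """Cross-check wear against the drive's endurance rating.
    NVMe 'Data Units Written' counts units of 1000 * 512 bytes."""
    bytes_written = data_units_written * 512_000
    tb_written = bytes_written / 1e12
    return 100.0 * tb_written / rated_tbw_tb

def days_until_worn(tb_written: float, uptime_days: float,
                    rated_tbw_tb: float) -> float:
    """Project remaining life from the average daily write rate so far."""
    tb_per_day = tb_written / uptime_days
    return (rated_tbw_tb - tb_written) / tb_per_day

# Example: a drive rated for 1200 TBW that has logged
# 500,000,000 data units (256 TB written) over one year of uptime.
print(wear_from_tbw(500_000_000, 1200))        # ~21.3% of rated endurance
print(days_until_worn(256.0, 365.0, 1200.0))   # ~1346 days of life left
```

For SATA drives the relevant attribute differs by vendor (e.g. Wear_Leveling_Count on Samsung, Media_Wearout_Indicator on Intel), so the TBW cross-check is the more portable back-of-envelope when the reported percentage looks implausible.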