SSD wear

  1. Proxmox on ZFS - Should I be worried?

    Hi, I self-host Proxmox on a dedicated server, running on 2 SSDs in a ZFS mirror plus 2 hard drives in an independent pool. The SMART results on the SSDs are starting to worry me a bit, and I'm thinking about ditching ZFS. A quick recap: it seems to me the "Power_On_Hours" value is incorrect. This server has...
    (a smartctl-based wear check is sketched after this list)
  2. Disk overview: Wearout percentage shows 0%, IPMI shows 17% ...

    Hi, we are running an older Proxmox Ceph cluster here and I am currently looking through the disks. The OS disks show a Wearout of two percent, but the Ceph OSDs still show 0%?! So I looked into the Lenovo XClarity Controller: for the OS disks it looks the same, but the Ceph...
  3. Reason PVE host keeps writing data to disk?

    I have a single PVE server (6.4, community repo) and I noticed it has a constant iodelay > 0 even when the host is almost idle. The host is very lightly loaded, so I did not expect to see that. There is a single M.2 NVMe disk on board (Samsung 970 EVO Plus), installed mostly as a test disk, and a couple of Intel 545...
    (a per-process write sampler is sketched after this list)
  4. Minimizing SSD wear through PVE configuration changes

    I'm configuring my new Proxmox server and want to reduce unnecessary wear on my SSD root mirror. I've done a lot of searching, and although it's easy enough to find some hints, I can't find a comprehensive tutorial for achieving this. What I have done so far: redirected /var/log to a separate HDD...
    (a write-rate measurement sketch follows this list)
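
For the SMART worries in the first two threads, one way to cross-check what the GUI and the BMC report is to read the wear-related fields straight from smartctl. A minimal sketch, assuming smartmontools 7.x (for the --json flag), root privileges, and placeholder device paths (/dev/sda, /dev/sdb) that you would replace with your own mirror members:

```python
#!/usr/bin/env python3
# Hedged sketch: cross-check SSD wear fields with smartctl's JSON output.
# Assumes smartmontools 7.x (--json), root privileges, and example device
# paths that should be replaced with the actual mirror members.
import json
import subprocess

def smart_report(device):
    # smartctl can exit non-zero for warning states, so don't use check=True.
    out = subprocess.run(["smartctl", "--json", "-a", device],
                         capture_output=True, text=True)
    return json.loads(out.stdout)

def summarize(device):
    data = smart_report(device)
    hours = data.get("power_on_time", {}).get("hours")
    print(f"{device}: power-on hours = {hours}")

    # NVMe drives expose a direct wear figure.
    nvme = data.get("nvme_smart_health_information_log")
    if nvme:
        print(f"  percentage used:    {nvme.get('percentage_used')}%")
        print(f"  data units written: {nvme.get('data_units_written')}")
        return

    # SATA SSDs report wear through vendor-specific attributes instead.
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr.get("name") in ("Wear_Leveling_Count", "Media_Wearout_Indicator",
                                "Percent_Lifetime_Remain", "Total_LBAs_Written"):
            raw = attr.get("raw", {}).get("string")
            print(f"  {attr['name']}: normalized={attr.get('value')} raw={raw}")

if __name__ == "__main__":
    for dev in ("/dev/sda", "/dev/sdb"):
        summarize(dev)
```

Different tools often disagree simply because they pick different attributes, or read a "life remaining" value where another reads "life used", so seeing the attribute names and raw values side by side usually explains gaps like 0% versus 17%.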
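
The constant iodelay in the third thread is usually easiest to pin down by watching which processes actually submit writes; on a PVE host that is typically things like pvestatd/rrdcached graphs, pmxcfs, and journald. A minimal sketch, assuming root and per-process I/O accounting in /proc/<pid>/io; the sampling window and top-10 cut-off are arbitrary choices:

```python
#!/usr/bin/env python3
# Hedged sketch: attribute writes to processes by sampling /proc/<pid>/io.
# Run as root; short-lived processes that exit between samples are missed.
import time
from pathlib import Path

def write_bytes_by_pid():
    samples = {}
    for proc in Path("/proc").iterdir():
        if not proc.name.isdigit():
            continue
        try:
            comm = (proc / "comm").read_text().strip()
            for line in (proc / "io").read_text().splitlines():
                if line.startswith("write_bytes:"):
                    samples[int(proc.name)] = (comm, int(line.split()[1]))
        except OSError:
            continue  # process exited or file not readable
    return samples

before = write_bytes_by_pid()
time.sleep(10)
after = write_bytes_by_pid()

deltas = []
for pid, (comm, end) in after.items():
    start = before.get(pid, (comm, 0))[1]
    if end > start:
        deltas.append((end - start, comm, pid))

# Print the ten heaviest writers over the sampling window.
for written, comm, pid in sorted(deltas, reverse=True)[:10]:
    print(f"{comm:<20} pid={pid:<7} wrote ~{written / 1024:.0f} KiB in 10 s")
```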
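
For the wear-minimization thread, it helps to measure the write rate before and after each change (moving /var/log, plus commonly suggested tweaks such as journald's Storage=volatile or noatime mounts), so you can tell which tweak actually matters. A minimal sketch that samples /proc/diskstats and extrapolates, assuming the load stays roughly steady over the window:

```python
#!/usr/bin/env python3
# Hedged sketch: estimate per-device write volume by sampling /proc/diskstats.
# The 60-second window is an assumption; extrapolating to a full day only
# makes sense if the host's load is steady. Sectors here are always 512 bytes.
import time

def sectors_written():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            name, written = parts[2], int(parts[9])  # sectors written column
            if name.startswith(("loop", "ram")):
                continue  # whole disks and their partitions both remain listed
            stats[name] = written
    return stats

before = sectors_written()
time.sleep(60)
after = sectors_written()

for name, end in sorted(after.items()):
    delta = end - before.get(name, end)
    gib_per_day = delta * 512 * 86400 / 60 / 2**30
    print(f"{name:<12} ~{gib_per_day:.2f} GiB/day at the current rate")
```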