Proxmox HA: Mirrored boot SSD disk wearout

edd

New Member
Sep 19, 2025
How many TB/year should you assume Proxmox HA writes to a mirrored boot SSD?

Does logging to a remote syslog server or using log2ram materially change this?

I'm wondering if there is a rule of thumb for how long you should expect your boot disks to last, given a disk's TBW endurance rating.
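For a back-of-the-envelope rule of thumb, expected lifetime is just the endurance rating divided by the effective write rate. A minimal sketch; the TBW, write-rate, and WAF figures below are illustrative assumptions, not measurements:

```python
def ssd_lifetime_years(tbw_tb, host_gb_per_day, waf=1.0):
    """Rough SSD lifetime estimate from its TBW endurance rating.

    tbw_tb:          rated endurance in terabytes written
    host_gb_per_day: average host-level writes per day
    waf:             assumed write amplification factor (NAND writes / host writes)
    """
    nand_gb_per_day = host_gb_per_day * waf
    days = (tbw_tb * 1000) / nand_gb_per_day
    return days / 365

# Example: a 600 TBW drive, 10 GB/day of logs, assumed WAF of 3
print(round(ssd_lifetime_years(600, 10, waf=3), 1))  # → 54.8 (years)
```

Note the answer scales linearly in all three inputs, so the uncertainty in your WAF estimate dominates everything else.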
 
Not sure if that's completely correct, but if you only use the mirror for PVE (no VMs or other workloads), then on a simple setup expect at least 10 GB/day for logs.
HA needs some more writes but that also depends on your configuration.

In a homelab with LVM-thin / ext4 you should not have to worry too much about TBW, but if you have to budget you could always get some (maybe used?) enterprise SSDs with PLP. They often have a higher TBW rating and better caching (with the help of PLP) to mitigate wearout.
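Rather than relying on a rule of thumb, you can measure your actual host write rate: in `/proc/diskstats`, the seventh statistics field after the device name is sectors written, always in 512-byte units regardless of the drive's physical sector size. A hypothetical sketch that takes two readings and extrapolates to GB/day (the sample lines are made-up values):

```python
def sectors_written(diskstats_line):
    """The 10th whitespace-separated column of a /proc/diskstats line
    is sectors written (512-byte units)."""
    return int(diskstats_line.split()[9])

def gb_per_day(line_before, line_after, interval_s):
    """Extrapolate daily host writes from two diskstats readings."""
    delta_sectors = sectors_written(line_after) - sectors_written(line_before)
    bytes_written = delta_sectors * 512
    return bytes_written / 1e9 * (86400 / interval_s)

# Two made-up readings for /dev/sda, taken one hour apart:
before = "8 0 sda 10000 0 500000 3000 20000 0 1000000 8000 0 5000 11000"
after  = "8 0 sda 10200 0 510000 3100 20500 0 1850000 8100 0 5100 11200"
print(round(gb_per_day(before, after, 3600), 2))  # → 10.44 (GB/day)
```

Taking the readings a day or a week apart smooths out bursts; `smartctl -A` on drives that report a host-writes attribute gives the same answer without any arithmetic on sectors.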

Maybe this reddit post helps
 
Not sure if that's completely correct, but if you only use the mirror for PVE (no VMs or other workloads), then on a simple setup expect at least 10 GB/day for logs.
That seems... unremarkable, given the amount of kvetching on the subject. Even the cheapest SSD I could find is rated for 40 TBW, which should last a few years, even with some write amplification.
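Putting numbers on that: at 10 GB/day of host writes, even the 40 TBW drive lasts years under fairly pessimistic write amplification. A quick sanity check; the WAF values swept here are illustrative assumptions:

```python
TBW_TB = 40       # endurance rating of the cheapest drive mentioned above
HOST_GB_DAY = 10  # assumed host-level log writes per day

for waf in (1, 2, 4):
    years = TBW_TB * 1000 / (HOST_GB_DAY * waf) / 365
    print(f"WAF {waf}: {years:.1f} years")
# → WAF 1: 11.0 years
# → WAF 2: 5.5 years
# → WAF 4: 2.7 years
```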

I was a bit worried, since older enterprise SSDs have a TBW in the low to mid hundreds, which is on par with today's consumer SSDs. Apparently those are still fine.

HA needs some more writes but that also depends on your configuration.

Maybe this reddit post helps
The reddit post seems to indicate that 10 GB / day was with HA enabled. Did I understand it wrong?
 
Maybe that's the frontend IOPS, but backend IOPS can be much higher if this is a consumer SSD.
Doesn't write amplification affect consumer and enterprise SSDs alike? Obviously enterprise SSDs have a higher TBW, but still.
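On drives that expose both counters in SMART data you can compute the factor directly: WAF = bytes written to NAND / bytes written by the host. Host writes are commonly reported as `Total_LBAs_Written`; NAND-write counters and their units are vendor-specific, so treat this as a hedged sketch with made-up counter values and unit assumptions:

```python
def write_amplification(host_lbas, nand_pages, lba_bytes=512, page_bytes=16384):
    """WAF = NAND bytes written / host bytes written.

    Unit conventions differ per vendor: some count LBAs, some count
    32 MiB units, some count NAND pages. Check your drive's datasheet
    for what its SMART counters actually mean.
    """
    host_bytes = host_lbas * lba_bytes
    nand_bytes = nand_pages * page_bytes
    return nand_bytes / host_bytes

# Made-up example counters (assumed 512 B LBAs, 16 KiB NAND pages):
print(round(write_amplification(host_lbas=2_000_000_000,
                                nand_pages=187_500_000), 2))  # → 3.0
```

On drives that only report host writes, you can't compute WAF exactly; watching how fast the wear-leveling / percent-used attribute falls relative to host writes gives a rough proxy.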

How large is the write amplification factor? Are the Proxmox HA default settings sane, or do you need to tune ZFS parameters according to your choice of storage?