Having started with Proxmox VE for my home/SMB environment very enthusiastically on old consumer hardware (the clustering feature is AWESOME; (live) migrating VMs and containers is just pure MAGIC), I'm now forced to reconsider, as ZFS is (presumably) killing my SSDs.
The current situation is a 3-node PVE 'cluster', with each node containing a single SSD (varying between 120 GB and 1 TB). I chose ZFS when installing the nodes because of the presumed advantages: snapshots, ease of expandability, etc.
The setup actually runs pretty well for my use case (about 6 VMs/containers currently). However, the 250 GB Samsung EVO SSD in my first node already shows 22% wearout (of which the last 2% accumulated in a matter of weeks), so it seems obvious the current setup will wear out the SSDs pretty quickly.
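For reference, this is roughly how I'm tracking the wear figure outside the PVE GUI (a minimal sketch, assuming smartmontools 7+ for the `--json` output; attribute 177 "Wear_Leveling_Count" is a Samsung-specific assumption and other vendors report wear differently):

```python
#!/usr/bin/env python3
# Minimal sketch: read an SSD wear attribute via smartctl's JSON output.
# Assumes smartmontools >= 7.0 (for --json) and a Samsung SSD exposing
# attribute 177 "Wear_Leveling_Count"; other SSDs use different attributes.
import json
import subprocess
import sys

DEVICE = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"

# Run smartctl and parse its JSON report (needs root).
out = subprocess.run(
    ["smartctl", "-A", "--json", DEVICE],
    capture_output=True, text=True, check=False,
).stdout
report = json.loads(out)

for attr in report.get("ata_smart_attributes", {}).get("table", []):
    if attr["id"] == 177:  # Wear_Leveling_Count (Samsung-specific assumption)
        # The normalized value counts down from 100; wearout = 100 - value.
        print(f"{DEVICE}: wearout ~{100 - attr['value']}%")
        break
else:
    print(f"{DEVICE}: no Wear_Leveling_Count attribute found")
```

Running this as root against each node's boot disk is how I noticed the jump over the last few weeks.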
Question is: should I start over completely, or can I gradually fix/reconfigure (or reinstall) each node without breaking the cluster?
Unfortunately, I don't have the budget for enterprise-grade hardware, so any config suggestions that would make my consumer SSDs last longer in a small PVE cluster would be very welcome.