I recently finished a "virtualize everything" setup on my home server with Proxmox, hosting a virtualized router (pfSense), Pi-hole, TrueNAS, and a "playground/lab" Ubuntu Server box. I *thought* I had done my due diligence researching this setup...until I came across a separate thread here today warning AGAINST the use of "consumer" SSDs. Alas. I missed it. I did *exactly* that.
I've had my system up and running for about two weeks now, and I noticed my Proxmox system SSD already shows 1% wearout, with over 642M LBAs written.
At this rate, it looks like I'll have to replace the thing in two years. Obviously I have plenty of time to plan accordingly, but clearly I should have seen and heeded the advice others here have given about using enterprise-grade SSDs. Lesson learned. Mea culpa.
The host SSD is a 1TB Samsung 870, and all my VMs are stored on it. I have a second, identical (presently unused) drive in the system.
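For what it's worth, here's the back-of-the-envelope math I've been staring at. It's only a rough sketch: I'm assuming the SMART LBA counter uses 512-byte sectors, and the 600 TBW figure is just the published endurance rating for the 870 EVO 1TB, which I haven't confirmed is my exact variant.

```python
# Rough SSD wear projection -- a back-of-the-envelope sketch, not gospel.
# Assumptions: SMART "Total LBAs Written" counts 512-byte sectors, and
# 600 TBW is the endurance rating (the published figure for an 870 EVO
# 1TB; I'd need to double-check my exact model).

DAYS_ELAPSED = 14            # roughly two weeks of uptime
LBAS_WRITTEN = 642_000_000   # ~642M LBAs from the SMART report
WEAROUT_PCT = 1              # Proxmox "Wearout" column (whole percent)
ASSUMED_TBW = 600            # terabytes-written rating (assumption)

tb_written = LBAS_WRITTEN * 512 / 1e12
tb_per_day = tb_written / DAYS_ELAPSED

print(f"Written so far: {tb_written:.2f} TB ({tb_per_day * 1000:.1f} GB/day)")

# Projection 1: extrapolate the write volume against the endurance rating
print(f"Years to reach {ASSUMED_TBW} TBW: {ASSUMED_TBW / tb_per_day / 365:.0f}")

# Projection 2: naive extrapolation of the wearout percentage
# (coarse, since the column only reports whole percents)
print(f"Years to 100% wearout at this rate: "
      f"{DAYS_ELAPSED / (WEAROUT_PCT / 100) / 365:.1f}")
```

The two projections don't agree with each other (or with my gut-feel "two years"), so I'm honestly not sure which number to trust, which is part of why I'm asking.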
With my overall setup pretty stable now, I figure I can image that boot SSD and stash a copy on the second drive, leaving it otherwise unused as a backup; that at least gives me a fallback while I work out something smarter for the longer term.
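If I go that route, the plan is basically a dumb block-level copy of the whole boot SSD onto the spare, along the lines of the sketch below. The device paths are placeholders (I'd verify them against lsblk first), and I'd run it from a live/rescue environment with the VMs shut down rather than on the running host; in practice dd or Clonezilla would do the same job.

```python
# Minimal block-level clone sketch: copy the boot SSD, byte for byte,
# onto the spare drive. Device paths are placeholders -- verify against
# lsblk before running, since this overwrites the target completely.
# Run from a rescue/live environment, not while VMs are writing to the
# source disk.

import os

SOURCE = "/dev/sda"      # placeholder: the worn Proxmox boot SSD
TARGET = "/dev/sdb"      # placeholder: the spare, otherwise-unused SSD
CHUNK = 4 * 1024 * 1024  # copy 4 MiB at a time

copied = 0
with open(SOURCE, "rb") as src, open(TARGET, "wb") as dst:
    while True:
        block = src.read(CHUNK)
        if not block:
            break
        dst.write(block)
        copied += len(block)
    dst.flush()
    os.fsync(dst.fileno())

print(f"Copied {copied / 1e9:.1f} GB")
```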
Admitting my error, and realizing I can't undo it immediately, I figure the best I can do right now is identify any defensive steps I can take to avoid taxing that drive more than necessary. It has no ZFS partitions, only the small BIOS boot partition, the EFI partition, and a resized LVM partition that takes up the rest of the drive (it didn't seem like I needed LVM-Thin for my initial setup). I was going to use the other SSD for VM backups (and maybe I still will), but a full image backup almost seems like a better idea.
What, if anything, can I do to mitigate the wearout rate on the SSD? Would tweaks to the configuration of the other VMs (TrueNAS comes to mind) offer any defensive measures?
As far as the rest of the config goes (I'm not really sure it's relevant, but perhaps something here will trigger some thoughts from the more learned here):
* TrueNAS VM (given 10GB RAM) runs on a 200GB virtual disk on that SSD, and hosts four Toshiba N300 4TB NAS drives in a RAID 1 config.
* pfSense firewall/router (4GB RAM) and a 50GB drive
* Pi-hole ad-blocking DNS server, 2GB RAM, 50GB drive
* Ubuntu lab server (4GB RAM), on a 100GB drive
* One 1Gb NIC (WAN-facing) is set up for hardware passthrough to the pfSense box, while the other (2.5Gb) NIC is bridged to our home LAN and used by all four VMs via virtio
* I've got one brand-new WD 3TB Red NAS drive in the box right now (given to me as a Christmas present), but it isn't committed to anything permanent at this point. There's also a 4TB WD desktop drive I bought a few months ago purely to expand the storage in my prior setup, before I decided to rebuild it all, so it, too, is not entirely committed. Finally, there's an old Seagate 2TB drive I'm keeping from the old box mainly in case there are old files on it I still want to pull off, but it's already well into its ~65,000-hour lifespan, so I wouldn't want to commit anything to it.
Thanks for taking the time to read this, and I appreciate any input. No brickbats please LOL
-sd