Hey there.
I've built a new Proxmox 4 machine for the small business I work at, upgrading from an old Core 2 Quad system running PVE 3.4. It will mostly be running a postgres database and a few web hosts in LXC, plus some Windows KVM test environments.
I'm using a pair of Samsung 850 Evos (I figured their performance and longevity were "good enough" for what we need, rather than springing for the Pro models), mirrored via ZFS through the installer.
(Sidenote: it'd be rad if Proxmox automatically created the ZFS storage for me, or at least directed me to do it, rather than spewing errors when I try to create VMs/CTs on the local storage. Either way, it's working now.)
Anyways, I've just realized that ZoL doesn't support TRIM. Oops. Technically I could get around this by putting the drives in my FreeNAS box, using a spare drive for the root storage, and connecting via iSCSI, but I don't think that would be an improvement unless I upgrade to 10G LAN.
Have I fucked up by using SSDs, consumer-grade ones at that, with ZFS on Linux? Should I expect a headache 3-6 months from now when everything starts crawling? All my googling leads me to believe that TRIM is a "big deal," but I don't know how big when it comes to this. Does anyone have any experience with SSDs in ZFS on Proxmox?
Thanks a lot.
EDIT - Feb 21, 2017
One year after this post was made, I decided to check in with my results for anyone else doing research. It's been almost a year since this system was deployed using a pair of 250GB Samsung 850 Evos (TLC flash) mirrored with ZFS.
The system hosts a modest postgres database, a small handful of django web servers, and a bunch of random test machines. The database is dumped to /tmp/ hourly and moved to our NAS as a backup, which is probably a major contributor to the writes.
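For the curious, that backup step is conceptually something like the sketch below. The database name, NAS mount point, and paths are placeholders rather than our actual config, and in reality it's just driven by cron:

```python
#!/usr/bin/env python3
# Simplified sketch of the hourly backup step described above.
# DB_NAME, DUMP_PATH, and NAS_PATH are placeholders, not the real setup.
import subprocess
import shutil
import time

DB_NAME = "appdb"                      # hypothetical database name
DUMP_PATH = "/tmp/appdb.sql.gz"        # dump lands on the SSD-backed rootfs first
NAS_PATH = "/mnt/nas/backups/"         # NAS mounted over NFS/CIFS (assumed)

def backup_once():
    # pg_dump piped straight into gzip so the on-disk dump stays small
    with open(DUMP_PATH, "wb") as out:
        dump = subprocess.Popen(["pg_dump", DB_NAME], stdout=subprocess.PIPE)
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        dump.wait()
    stamp = time.strftime("%Y%m%d-%H%M")
    shutil.move(DUMP_PATH, f"{NAS_PATH}appdb-{stamp}.sql.gz")

if __name__ == "__main__":
    backup_once()
```

Since the dump hits /tmp/ on the SSD mirror before it ever reaches the NAS, every hourly run counts against the drives' write totals, which is why I flag it as a major contributor below.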
Total_LBAs_Written translates to about 20TB written in the last 12 months, or about 1.66TB/month. Given the workload of a hypervisor and the copy-on-write nature of ZFS, I'd expect this to be high, but I've probably been a little hard on the drives with the backups.
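If you want to check your own drives, something along these lines pulls Total_LBAs_Written and does the same conversion. It's a rough sketch: smartmontools needs to be installed, the device paths are assumptions, and the parsing is deliberately naive:

```python
# Read SMART attribute 241 (Total_LBAs_Written) and convert to decimal TB.
# Assumes 512-byte logical sectors (what the 850 Evo reports) and that the
# mirror members sit at /dev/sda and /dev/sdb -- adjust for your own box.
import re
import subprocess

SECTOR_BYTES = 512
MONTHS_IN_SERVICE = 12

def tb_written(device: str) -> float:
    out = subprocess.run(
        ["smartctl", "-A", device], capture_output=True, text=True, check=True
    ).stdout
    # Raw value is the last number on the Total_LBAs_Written line
    match = re.search(r"Total_LBAs_Written.*?(\d+)\s*$", out, re.MULTILINE)
    if match is None:
        raise RuntimeError(f"no Total_LBAs_Written attribute found on {device}")
    return int(match.group(1)) * SECTOR_BYTES / 1e12  # decimal TB

for dev in ("/dev/sda", "/dev/sdb"):
    tb = tb_written(dev)
    print(f"{dev}: {tb:.1f} TB written, {tb / MONTHS_IN_SERVICE:.2f} TB/month")
```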
For comparison, I looked at a nearby workstation using a 120GB 840 Evo that was purchased 45 months ago. It has about 10TB written, or 0.22TB/month. All other factors aside, we could then say that my particular PVE ZFS write workload is about 7.5x that of a simple Windows office workstation.
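Spelled out, that comparison is just the arithmetic from the numbers above:

```python
# Monthly write rates and their ratio, using the figures quoted above.
server_tb_per_month = 20 / 12        # PVE + ZFS mirror: ~20 TB over 12 months
workstation_tb_per_month = 10 / 45   # office 840 Evo: ~10 TB over 45 months
print(round(server_tb_per_month / workstation_tb_per_month, 1))  # ~7.5
```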
So yeah, it's a lot. But even so, if we compare our TB written to Tech Report's SSD endurance test (and we assume the 850 Evos behave anything like the 840 Evos they tested), we're still only about a tenth of the way to the point where their drives started reallocating sectors, and about a hundredth of the way to the drives outright dying.
Hope this helps anyone looking for data.