Specification | Micron 7450 Pro | Samsung PM9A3 M.2 | Kingston Data Center DC2000M |
--- | --- | --- | --- |
Interface | PCIe 4.0 x4 | PCIe 4.0 x4 | PCIe 3.0 x4 |
Form Factor | U.3, M.2 2280, E1.S | M.2 2280 | U.2 (2.5”) |
NAND Type | 176-layer 3D TLC | 128-layer 3D TLC | 96-layer 3D TLC |
Sequential Read | Up to 6,800 MB/s | Up to 6,500 MB/s | Up to 3,100 MB/s |
Sequential Write | Up to 5,600 MB/s | Up to 3,500 MB/s | Up to 2,600 MB/s |
Random Read (4K IOPS) | Up to 1,000,000 IOPS | Up to 900,000 IOPS | Up to 275,000 IOPS |
Random Write (4K IOPS) | Up to 180,000–300,000 IOPS | Up to 170,000 IOPS | Up to 70,000 IOPS |
Capacity Range | 400 GB to 15.36 TB | 960 GB to 7.68 TB | 960 GB to 7.68 TB |
Endurance (DWPD) | 0.7–3.0 DWPD (depending on model) | 1.3 DWPD | 0.5 DWPD |
Endurance (TBW) | Up to 32.5 PBW (15.36 TB model) | ~14 PBW (7.68 TB model) | ~5.4 PBW (7.68 TB model) |
Power Loss Protection | Yes | Yes | Yes |
Encryption | TCG Opal 2.01, IEEE-1667 | TCG Opal | TCG Opal 2.0 |
Target Market | Mixed workload, read-intensive, 24/7 | Read-intensive, cloud servers | Read-intensive, data centers |
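The endurance rows are easier to compare if you convert the DWPD ratings into total writes over the warranty period. Here is a minimal sketch, assuming the usual 5-year warranty term; vendor TBW figures are rated against specific workloads, so the results are ballpark only and won't reproduce the datasheet numbers exactly.

```python
def dwpd_to_tbw(capacity_tb: float, dwpd: float, warranty_years: float = 5.0) -> float:
    """Convert a drive-writes-per-day rating into total terabytes written
    over the warranty period: TBW = capacity * DWPD * 365 * years."""
    return capacity_tb * dwpd * 365 * warranty_years

# Rough figures for the capacities/DWPD ratings in the table above
# (assumed 5-year warranty; vendor TBW numbers are workload-specific).
print(dwpd_to_tbw(15.36, 1.0))  # ~28,000 TB
print(dwpd_to_tbw(7.68, 1.3))   # ~18,200 TB
print(dwpd_to_tbw(7.68, 0.5))   # ~7,000 TB
```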
ZFS is rough on SSDs as far as I know (I avoid it on SSDs because of that, never even tried it). Any of those will probably be alright; the Micron looks best out of the details provided.
Interesting perspective on SSD wearout (assuming that's what you are referring to). I have a 5-year-old server running Proxmox and ZFS on M.2 drives, and they are just now at 29% wearout. How long do you expect them to run for? I have 10 more years on mine, just curious.
What brand are you using?
That is really good; most I've seen do not wear so well unless you don't use them much. I tend to use my system fairly heavily, and I do not currently have enterprise SSDs either (only enterprise HDDs), so I wouldn't even risk it without enterprise drives. But even then, with ZFS you are making extra writes: it's copy-on-write, it keeps multiple metadata copies, ultra-small files still count whole blocks toward the wear level (so writing small files can lead to higher wear despite not actually writing that much data), and so on. It just seems to me that under heavy use you're basically cutting the life of the SSD at least in half by using ZFS.
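To put a rough number on that intuition: if you assume some write-amplification factor for ZFS overhead, the effect on rated life is just a division. A minimal sketch with made-up figures; the 7,000 TBW drive, the 1 TB/day write rate, and the 2x amplification factor below are all hypothetical, not measured values.

```python
def drive_life_years(tbw_rating_tb: float, host_writes_tb_per_day: float,
                     write_amplification: float = 1.0) -> float:
    """Estimate years until the TBW rating is exhausted, given average host
    writes per day and an assumed filesystem-level write-amplification factor."""
    return tbw_rating_tb / (host_writes_tb_per_day * write_amplification * 365)

# Hypothetical 7,000 TBW drive seeing 1 TB/day of host writes:
print(drive_life_years(7000, 1.0))        # ~19 years with no extra amplification
print(drive_life_years(7000, 1.0, 2.0))   # ~9.6 years if ZFS doubled the writes
```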
Samsung MZ1LB960HAJQ-0007
It's quite good.
Here's SMART data on one of them:
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 39 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 29%
Data Units Read: 123,137,498 [63.0 TB]
Data Units Written: 1,373,058,239 [703 TB]
Host Read Commands: 14,735,929,268
Host Write Commands: 79,932,524,714
Controller Busy Time: 28,302
Power Cycles: 36
Power On Hours: 26,540
Unsafe Shutdowns: 15
Media and Data Integrity Errors: 0
Error Information Log Entries: 17
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 39 Celsius
Temperature Sensor 2: 50 Celsius
Temperature Sensor 3: 55 Celsius
Also, I have 16 of them in ZRAID10 (striped mirrors). The host is running 60 Win11 VMs on those drives for thin-client kiosks in our public area.
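For what it's worth, a simple linear extrapolation from the SMART counters above (29% used in 26,540 power-on hours, 703 TB written) puts total life well past ten years of power-on time at the current write rate. A rough sketch; wear isn't perfectly linear, so treat it as a ballpark only.

```python
# Back-of-the-envelope wear projection from the smartctl output above.
percentage_used = 29                 # NVMe "Percentage Used"
power_on_hours = 26_540
data_units_written = 1_373_058_239   # NVMe data units = thousands of 512-byte blocks

tb_written = data_units_written * 512_000 / 1e12        # ~703 TB
write_rate_tb_per_day = tb_written / (power_on_hours / 24)
hours_to_100_pct = power_on_hours / (percentage_used / 100)

print(f"written so far:       {tb_written:.0f} TB")
print(f"avg write rate:       {write_rate_tb_per_day:.2f} TB/day")
print(f"projected total life: {hours_to_100_pct / 8760:.1f} years of power-on time")
```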
But if you're getting wear that slow, I can definitely see how it is the better choice for many.
That is really, really good then, and it sounds like you have a great setup. That is interesting to see; I didn't think the wear rate was quite that good. I really need to get some of those drives, haha. (I'm currently using consumer drives, which aren't bad really, but I'm also not using ZFS on them either; it would help if I could, especially with the coming ZFS 2.3 with fast dedup and Direct I/O, which will improve those drives and ZFS a lot.)
So I could build a cluster with ZFS and replication with consumer NVMe. Interesting.
I'm not saying that; my SSDs are not consumer. But if you had enough of them and ran RAID10 like mine, then the wearout per drive is better the more you add. That's the takeaway here.
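The striped-mirror point is easy to sanity check: in a pool of N drives arranged as N/2 two-way mirrors, each pool write lands on one mirror pair, so per-drive writes scale roughly as total writes divided by N/2. A minimal sketch, assuming writes spread evenly across pairs and using a made-up yearly write volume.

```python
def per_drive_writes_tb(pool_writes_tb: float, n_drives: int) -> float:
    """Approximate writes each drive absorbs in a pool of striped two-way
    mirrors (RAID10): each write hits one mirror pair (both of its drives),
    and pairs share the load roughly evenly."""
    mirror_pairs = n_drives // 2
    return pool_writes_tb / mirror_pairs

# Hypothetical 1,000 TB of pool writes per year:
for n in (4, 8, 16):
    print(n, "drives ->", per_drive_writes_tb(1000, n), "TB per drive per year")
# 4 drives -> 500.0, 8 drives -> 250.0, 16 drives -> 125.0
```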
M.2 2280 with PLP is Kingston only (AFAIK). Others like Micron and Samsung are M.2 22110 length and require a heatsink. Space and good airflow are required too, which excludes mini PCs.
The Micron 7450 Pro is M.2 2280.