ZFS - Which NVMe M.2?

Sorry, no recommendation from my side. My main criterion would be: is "Power Loss Protection" (PLP) present?

Most manufacturers have publicly accessible datasheets for their products. If you had linked them, I possibly would have looked at them.
 
Specification | Micron 7450 Pro | Samsung PM9A3 M.2 | Kingston Data Center DC2000M
--- | --- | --- | ---
Interface | PCIe 4.0 x4 | PCIe 4.0 x4 | PCIe 3.0 x4
Form Factor | U.3, M.2 2280, E1.S | M.2 2280 | U.2 (2.5")
NAND Type | 176-layer 3D TLC | 128-layer 3D TLC | 96-layer 3D TLC
Sequential Read | Up to 6,800 MB/s | Up to 6,500 MB/s | Up to 3,100 MB/s
Sequential Write | Up to 5,600 MB/s | Up to 3,500 MB/s | Up to 2,600 MB/s
Random Read (4K) | Up to 1,000,000 IOPS | Up to 900,000 IOPS | Up to 275,000 IOPS
Random Write (4K) | Up to 180,000–300,000 IOPS | Up to 170,000 IOPS | Up to 70,000 IOPS
Capacity Range | 400 GB to 15.36 TB | 960 GB to 7.68 TB | 960 GB to 7.68 TB
Endurance (DWPD) | 0.7–3.0 (depending on model) | 1.3 | 0.5
Endurance (TBW) | Up to 32.5 PBW (15.36 TB model) | ~14 PBW (7.68 TB model) | ~5.4 PBW (7.68 TB model)
Power Loss Protection | Yes | Yes | Yes
Encryption | TCG Opal 2.01, IEEE-1667 | TCG Opal | TCG Opal 2.0
Target Market | Mixed workload, read-intensive, 24/7 | Read-intensive, cloud servers | Read-intensive, data centers
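
To relate the two endurance rows: DWPD and TBW express the same rating, roughly TBW = DWPD × capacity × 365 × warranty years. A quick sketch, assuming the usual 5-year warranty term (an assumption; vendors do not all rate against the same workload, so check each datasheet):

Code:
# Rough DWPD <-> TBW conversion; the 5-year warranty term is an assumption.
WARRANTY_YEARS = 5

def dwpd_to_tbw(dwpd, capacity_tb):
    """Terabytes written over the warranty period."""
    return dwpd * capacity_tb * 365 * WARRANTY_YEARS

print(dwpd_to_tbw(1.3, 7.68))  # PM9A3 7.68 TB: ~18,220 TB nominal vs ~14 PBW listed
print(dwpd_to_tbw(0.5, 7.68))  # DC2000M 7.68 TB: ~7,008 TB nominal vs ~5.4 PBW listed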

The TBW is great for the Samsung, apparently.
 
ZFS is rough on SSDs as far as I know (I avoid it on SSDs because of that; never even tried it). Any of those will probably be alright; the Micron looks best from the details provided.
 
Hi!

I prefer the following order:
- Kioxia
- Micron
- Seagate
- Samsung

M.2 is the "consumer"-grade form factor, not an "enterprise" one (for example: no hot-swap support).
U.2/U.3/E1/E3 are the "enterprise" form factors.
 
M.2 2280 is also used as an enterprise form factor; the greater-length variants (e.g. 22110) do not fit some consumer products but are compatible with many. I would also specifically avoid Seagate because of the high failure rate and hit-and-miss models: some models are just terrible and fail like crazy while others are decent. That is kind of a big risk if you care about your data.
 
ZFS is rough on SSDs as far as I know (I avoid it on SSDs because of that; never even tried it). Any of those will probably be alright; the Micron looks best from the details provided.
Interesting perspective on SSD wearout (assuming that's what you are referring to). I have a 5-year-old server running Proxmox and ZFS on M.2 drives, and they are just now at 29% wearout. How long do you expect them to run? I figure I have 10 more years on mine; just curious.
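
For what it's worth, a linear projection of my numbers looks like this (a sketch; it only holds if the workload stays roughly constant):

Code:
# Linear wearout projection from SMART "Percentage Used".
age_years = 5
pct_used = 29

total_life_years = age_years * 100 / pct_used   # ~17.2 years to 100%
remaining_years = total_life_years - age_years  # ~12.2 years left
print(f"~{total_life_years:.1f} y total, ~{remaining_years:.1f} y remaining")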
 
Interesting perspective on SSD wearout (assuming that's what you are referring to). I have a 5-year-old server running Proxmox and ZFS on M.2 drives, and they are just now at 29% wearout. How long do you expect them to run? I figure I have 10 more years on mine; just curious.
What brand are you using?
 
Interesting perspective on SSD wearout (assuming that's what you are referring to). I have a 5-year-old server running Proxmox and ZFS on M.2 drives, and they are just now at 29% wearout. How long do you expect them to run? I figure I have 10 more years on mine; just curious.
That is really good; most I've seen do not wear so well unless they aren't used much. I tend to use my system fairly heavily, and I do not currently have enterprise SSDs either (only enterprise HDDs), so I wouldn't even risk it without enterprise drives. But even then, ZFS makes extra writes: it is copy-on-write, it keeps multiple metadata copies, and ultra-small files still count whole blocks toward the wear level, so writing lots of small files can raise the wear level despite not actually writing much data. Under heavy use it seems to me you are, at the least, basically cutting the life of the SSD in half by using ZFS.

But if you're getting wear that slow, I can definitely see how it is the better choice for many.
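
To put the "cutting the life in half" idea in numbers: write amplification divides the drive's usable TBW by the same factor. A sketch with made-up figures (the endurance rating and the 100 GB/day workload are assumptions for illustration only):

Code:
# How a write amplification factor (WAF) shortens SSD life (illustrative numbers).
rated_tbw = 1400             # hypothetical ~1.4 PBW endurance rating
host_gb_per_day = 100        # assumed host-level write workload

for waf in (1.0, 2.0, 3.0):  # WAF 2.0 is the "life cut in half" case
    nand_tb_per_day = host_gb_per_day * waf / 1000
    print(f"WAF {waf}: ~{rated_tbw / nand_tb_per_day / 365:.0f} years to rated TBW")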
 
What brand are you using?
Samsung MZ1LB960HAJQ-0007
Here's the SMART data from one of them:

SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 39 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 29%
Data Units Read: 123,137,498 [63.0 TB]
Data Units Written: 1,373,058,239 [703 TB]
Host Read Commands: 14,735,929,268
Host Write Commands: 79,932,524,714
Controller Busy Time: 28,302
Power Cycles: 36
Power On Hours: 26,540
Unsafe Shutdowns: 15
Media and Data Integrity Errors: 0
Error Information Log Entries: 17
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 39 Celsius
Temperature Sensor 2: 50 Celsius
Temperature Sensor 3: 55 Celsius
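
In case the counters look odd: NVMe "Data Units" are 1,000 × 512-byte blocks (512,000 bytes each), which is where the bracketed [TB] figures come from:

Code:
# NVMe "Data Units" = 1,000 x 512-byte blocks = 512,000 bytes each.
units_written = 1_373_058_239
bytes_written = units_written * 512_000
print(bytes_written / 1e12)           # ~703.0 TB, matching the log
print(bytes_written / 26_540 / 1e9)   # ~26.5 GB written per powered-on hour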
 
It's quite good.
 
That is really good; most I've seen do not wear so well unless they aren't used much. I tend to use my system fairly heavily, and I do not currently have enterprise SSDs either (only enterprise HDDs), so I wouldn't even risk it without enterprise drives. But even then, ZFS makes extra writes: it is copy-on-write, it keeps multiple metadata copies, and ultra-small files still count whole blocks toward the wear level, so writing lots of small files can raise the wear level despite not actually writing much data. Under heavy use it seems to me you are, at the least, basically cutting the life of the SSD in half by using ZFS.

But if you're getting wear that slow, I can definitely see how it is the better choice for many.
Also, I have 16 of them in a ZFS RAID10 (striped mirrors). The host runs 60 Win11 VMs on those drives for thin-client kiosks in our public area.
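
(Assuming those are the 960 GB model and the pool is eight 2-way mirror vdevs, the usable space works out like this:)

Code:
# 16 drives as 8 two-way mirrors striped together (ZFS-style RAID10).
drives, capacity_tb = 16, 0.96   # assuming the 960 GB model
mirrors = drives // 2
print(f"{mirrors} mirror vdevs, ~{mirrors * capacity_tb:.2f} TB usable")  # ~7.68 TB
# Writes fan out across all 8 vdevs, which also spreads the wear.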
 
Also, I have 16 of them in a ZFS RAID10 (striped mirrors). The host runs 60 Win11 VMs on those drives for thin-client kiosks in our public area.
That is really, really good then; it sounds like you have a great setup. Interesting to see, I didn't think the wear rate was quite that good. I really need to get some of those drives, haha. (I'm currently using consumer drives, which aren't bad really, but I'm also not using ZFS on them; it would help if I could, especially with the coming ZFS 2.3 with fast dedup and Direct IO, which will improve those drives and ZFS a lot.)
 
M.2 2280 with PLP is Kingston only (AFAIK).
Edit: the Micron 7450 PRO is available in 480 GB and 960 GB in 2280.

Others, like Micron and Samsung, are M.2 22110 length and require a heatsink.
Space and good airflow are required too, which excludes mini PCs.
 