I had many problems with VERY HIGH iowait on my Proxmox VE systems using a Crucial MX500, which is a consumer SSD (and one with very little DRAM, so once that fills up, performance drops like a rock).
Now I got 2 x Intel DC S3610 1.6 TB SSDs, which should be very good for VM storage or even more intensive workloads, since many sources recommended that enterprise SSDs - even second-hand - have WAY better performance.
According to the datasheet, these are rated at 10.7 PB (10'700 TB) of write endurance.
Looking at the SMART data (assuming it wasn't tampered with) shows that approx. 4.2 TB of data were written. However, after doing a pass with badblocks and comparing the values before/after, it seems that this figure is an 8x UNDERESTIMATE, probably due to the wrong block/sector size being used in my calculation (likely a 512 B vs 4096 B issue):
Code:
echo "GB Written: $(echo "scale=3; $(sudo /usr/sbin/smartctl -A /dev/disk/by-id/ata-XXXXXXXXXXXXXXXXXXXXXXX | grep "Total_LBAs_Written" | awk '{print $10}') * 512 / 1073741824" | bc | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta')"
Anyways, the main concern is the HUGE raw value for Raw_Read_Error_Rate and Read_Soft_Error_Rate. On one disk, Raw_Read_Error_Rate went from 3211595278 to 4294967295. badblocks didn't report any bad blocks, but it still doesn't inspire confidence. It might be flash/controller calendar aging, because it's surely not wearout at 0.3% of the endurance rating.
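One observation about that final value: 4294967295 is exactly the maximum of an unsigned 32-bit counter, so the raw value may simply have saturated rather than reflecting a real error count. A one-liner confirms the arithmetic:

```shell
# 4294967295 == 2^32 - 1, the maximum value of an unsigned 32-bit integer,
# i.e. the point at which a 32-bit SMART raw counter would stop incrementing.
echo $(( (1 << 32) - 1 ))   # prints 4294967295
```

On many drives these raw error-rate attributes are vendor-specific and not directly meaningful, so a saturated counter alone isn't necessarily proof of a failing disk.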
Not sure if I should return these to the seller (it would be quite expensive to ship them back to the U.S.) or what else to do with them. Should I put them in a ZFS pool, run some fio on it, and kind of stress test?
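If you do go the stress-test route before trusting them, a sustained mixed random read/write run with fio is one way to load the drives and watch whether the SMART error counters keep climbing. This is only a sketch - the device path is a placeholder, the parameters are illustrative, and writing directly to the raw device DESTROYS all data on it:

```shell
# WARNING: destructive - overwrites the raw device. Use only on empty disks.
# 1 hour of 70/30 random read/write at 4k, direct I/O, moderate queue depth.
fio --name=stress-test \
    --filename=/dev/disk/by-id/ata-XXXXXXXXXXXXXXXXXXXXXXX \
    --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=70 --bs=4k \
    --iodepth=32 --numjobs=4 \
    --time_based --runtime=3600 \
    --group_reporting
```

Comparing `smartctl -A` output before and after such a run (reallocated sectors, pending sectors, and the error-rate attributes) should say more about the drives' health than the absolute raw values do.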