What ashift for ZFS SSDs?

Dunuin

Distinguished Member
Jun 30, 2020
Germany
Hi,

I've got a Samsung 970 Evo (MZ-V7E500BW) and a Crucial BX500 (CT120BX500SSD1) SSD and I'm not sure what value to use for ashift when setting up ZFS.

Samsung's datasheet doesn't mention anything about sector size. Crucial states "Industry-standard, 512-byte sector size support" but not whether that is the physical sector size. I used Win10 to run "fsutil fsinfo sectorInfo D:" for both drives and it tells me that the physical and logical sector sizes of both SSDs are 512B and the NTFS cluster size is 4K.
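For comparison, on Linux you can check what sector sizes a drive reports like this (just a sketch; the device name is a placeholder for the actual disk):

lsblk -o NAME,PHY-SEC,LOG-SEC /dev/nvme0n1    # logical/physical sector size as reported to the kernel
smartctl -i /dev/nvme0n1                      # the info section also lists the reported sector size(s)

Both only show what the firmware reports, though, so a 512B-emulated drive will still show up as 512B.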

1.) I've read it isn't good to choose 512B, because if I ever need to replace a drive it might be hard to find one with a 512B physical sector size, since 4K or 8K might be the standard by then.
2.) Is it possible that the SSDs really just use 512B emulation and work with 4K or 8K internally, and Windows simply reports them as having a 512B physical sector size?

I want the SSDs to wear out as slowly as possible. As far as I understand, the SSDs will wear out much faster if I use 4K while they work internally with 512B, because then more flash cells than needed are written whenever a write operation is smaller than 4K. But if they work internally with 4K or 8K and I use 512B, they will wear out faster too and will be slower, because they have to read, erase and rewrite the full 4K or 8K page even if only 512B of it changes.

What ashift should I use for the two SSDs?
 
Most SSDs these days use 8K as their internal block size (even if they advertise 512B).

Also, you could try to create a pool without specifying any ashift and see what ZFS tells you (ZFS has a list of HDD/SSD models with the right sector size, so maybe your drives are in that list).
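Something like this should show it (untested sketch; pool and device names are placeholders):

zpool create testpool /dev/disk/by-id/your-ssd    # no ashift given, so ZFS picks one itself
zdb -C testpool | grep ashift                     # shows which ashift ZFS chose
zpool destroy testpool                            # throw the test pool away afterwards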
 
Please search the forums for why using consumer-grade SSDs with ZFS and PVE is not a good idea and why they will fail fast.
I heard that, but new enterprise SSDs are way over my budget and buying used, worn-out enterprise SSDs didn't sound like a good idea either.
I don't know how they will wear out on Proxmox, but I SMART-monitored my consumer SSDs for a year on my FreeNAS server running 10 jails/VMs, and there they only wrote 47GB per day, which should be around 20 years until reaching the TBW.
 
I heard that, but new enterprise SSDs are way over my budget and buying used, worn-out enterprise SSDs didn't sound like a good idea either.
I don't know how they will wear out on Proxmox, but I SMART-monitored my consumer SSDs for a year on my FreeNAS server running 10 jails/VMs, and there they only wrote 47GB per day, which should be around 20 years until reaching the TBW.

Good to know that you know the drill :-D
Monitoring is key for consumer SSDs.
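For example, something like this (sketch; the exact attribute names differ between vendors and between SATA and NVMe):

smartctl -A /dev/sda      # SATA: watch e.g. "Wear_Leveling_Count" and "Total_LBAs_Written"
smartctl -A /dev/nvme0    # NVMe: watch "Percentage Used" and "Data Units Written"

If you log those values regularly you can extrapolate how long the drives will last.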

I bought a lot of "used" enterprise SSDs, and they all still had over 90%, most even 95%, of their life left. So I guess you just need to be lucky.
 
Any other recommendations to reduce wear? I also ZFS-formatted a single Intel S3700 200GB enterprise SATA SSD and a single 2TB enterprise HDD.
I thought it might be a good idea to create two virtual disk images for every VM. Root goes on the fast mirrored consumer SSDs, and "/tmp" and swap (with low swappiness) go on the unmirrored enterprise SSD.
I'm not sure how yet, but I will try to save the VM snapshots and backups to the enterprise HDD.
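For the low swappiness I'm thinking of something like this inside the guests (sketch, not tested yet):

sysctl vm.swappiness=10                          # takes effect immediately
echo "vm.swappiness = 10" >> /etc/sysctl.conf    # and keeps it across reboots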
 
I'm not sure how yet, but I will try to save the VM snapshots and backups to the enterprise HDD.

Snapshots are stored in the underlying storage, so if you have a "fast" and a "slow" ZFS pool, snapshots of disks from the "fast" pool will be saved in the "fast" pool, and the same goes for the "slow" one.
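You can see that when listing snapshots; each one lives under the dataset it was taken from (sketch, pool and disk names are placeholders):

zfs snapshot fastpool/vm-100-disk-0@before-update    # snapshot of a disk on the "fast" pool
zfs list -t snapshot                                 # it shows up under fastpool, not under the slow pool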