ZFS is undeniably a high-tech filesystem, but it has its pros and also its cons.
What I really, really like about ZFS is the ability to build raidz or draid pools out of NVMe drives with really good read/write performance,
while mdadm gets horrible write performance. Its reads can be about 30% better, but that is no advantage against being a factor of 7 slower on writes.
For 16-24 NVMe drives you can get far better performance with 2 current hardware RAID controllers (e.g. PERC 12 H965i), but that comes at a luxury price.
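Just as a rough sketch of what such a pool looks like (pool and device names are made up here, adjust to your own hardware):

    # raidz2 pool over 6 NVMe drives; draid layouts are created the same way with a draid vdev type
    zpool create -o ashift=12 tank raidz2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1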
Second is checksumming of data, which with XFS you could otherwise only get from an external RAID storage system.
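You can let ZFS verify every block against those checksums on demand, e.g. (pool name made up):

    zpool scrub tank       # read and verify all data against its checksums
    zpool status -v tank   # shows scrub progress and any files with errors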
Third, if you use virtualization, only the changed blocks of the virtual disks get replicated by zfs send/recv.
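That incremental replication looks roughly like this (dataset, snapshot and host names are just placeholders):

    zfs snapshot tank/vm-disks@tuesday
    # send only the blocks changed since @monday to the backup box
    zfs send -i @monday tank/vm-disks@tuesday | ssh backuphost zfs recv -F backup/vm-disks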
Fourth, I like ZFS's snapshot implementation: you could take one every minute for 10 years
as protection against any kind of mistake from hardware, software (OS + apps), users and even virus manipulation etc.
Maybe it will never be needed, but this feature is insurance for the future.
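Taking such a snapshot is a one-liner you can run from cron every minute (dataset name made up; tools like zfs-auto-snapshot or sanoid also handle the retention for you):

    zfs snapshot tank/data@auto-$(date +%Y%m%d-%H%M)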
And not to forget the ZFS community here, always helpful and so responsive - the nicest ever!!
The cons of ZFS start with performance when you work with millions of files and tens to hundreds of TB -
and that while it advertises itself as the zettabyte filesystem, yet it struggles really hard long before the PB range.
If you generate 1 TB of new data and remove e.g. 500 GB every day, what a pity for zfs send/recv;
better to use rsync over an NFS mount so you don't hit the 100% single-core ssh limit, and you can even run parallel rsyncs.
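A parallel rsync over an NFS mount can look roughly like this (paths and job count are assumptions, not a recommendation for every setup):

    # one rsync job per top-level directory, 8 at a time, over NFS instead of ssh
    cd /mnt/nfs-src && ls -d */ | xargs -P 8 -I{} rsync -a --delete {} /tank/backup/{}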
Second, there are these endless ZFS kernel tuning parameters which you can change a little or a lot,
but really nothing about performance changes remarkably. The biggest effect comes from changing the
recordsize of datasets. If you go down to 64k, 32k, 16k, 8k, metadata performance gets better, but at the same moment throughput goes further and further down;
if you take a bigger recordsize of 256k/.../1M/.../16M, metadata performance falls into a black hole.
So in my opinion the default of 128k is the best compromise in a mixed environment of files with different sizes and types,
or, if you use a ZFS special device, also recordsize=1M.
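For reference, this is the kind of per-dataset tuning I mean (pool, dataset and device names are placeholders):

    zfs set recordsize=128K tank/data     # the default, good compromise for mixed files
    zfs set recordsize=1M tank/media      # large sequential files
    # with a special vdev, metadata and small blocks land on fast mirrored NVMe:
    zpool add tank special mirror /dev/nvme10n1 /dev/nvme11n1
    zfs set special_small_blocks=64K tank/data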
Third, again performance, this time over NFS: metadata (find ...) and reads could be better, but writes ..., then you are told to add a SLOG ... but what is ZFS actually doing?
Without a SLOG, sync data is written slowly to the pool. With a SLOG it is first written there and stays there, hopefully for nothing,
and that data must then still be written, unchanged and just as slowly, to the pool ... so while this is still being written, all reads at the same time are slowed down
- think of a computed results file of like 500G - for quite a long time as well.
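Adding a SLOG and checking how sync writes are handled is simple enough; whether it actually helps your workload is another question (device and dataset names made up):

    zpool add tank log mirror /dev/nvme12n1 /dev/nvme13n1   # mirrored SLOG device
    zfs get sync tank/nfs-share                             # standard / always / disabled
    # sync=disabled skips the ZIL entirely but risks losing the last few seconds of writes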
Fourth, there is this "zpool cannot import" problem: if you google "cannot import" daily for 30 days with the "last 24h" filter, you find 5-10 posts each week.
Are all these guys doing it wrong (and why, when the ZFS documentation is really good), or are they all using weak hardware?? Hands up - I don't know.
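For what it's worth, the usual first attempts when a pool refuses to come back look like this (pool name is a placeholder; -F is a last resort):

    zpool import                          # list pools the system can see
    zpool import -d /dev/disk/by-id tank  # scan a specific device directory
    zpool import -F tank                  # recovery mode, may discard the last few transactions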