Worse performance with higher specs server

2700 fsyncs/s still sounds low (it should get at least close to double that), but it's much better than the 900 you had before.
I would say, for consumer SSDs like the 870 EVO etc., 900 is great.
For enterprise SSDs it's probably poor, you're right. I simply skipped enterprise SSDs on my side and went directly to enterprise NVMe drives, so I don't have any experience with enterprise SSDs.
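If anyone wants a quick sanity check of their own fsync rate, here is a crude shell loop as a rough stand-in (the number is not directly comparable to pveperf's FSYNCS/SECOND, since pveperf uses a different method; run it on the filesystem or pool you want to test, path and iteration count are just placeholders):

```shell
#!/bin/sh
# Crude fsyncs-per-second estimate: write one 4k block and fsync it, N times.
# Run this in a directory on the filesystem/pool you want to measure.
N=200
f=./fsync-test.tmp
start=$(date +%s.%N)
i=0
while [ "$i" -lt "$N" ]; do
  # conv=fsync forces a physical write before dd exits; notrunc keeps the file
  dd if=/dev/zero of="$f" bs=4k count=1 conv=fsync,notrunc 2>/dev/null
  i=$((i + 1))
done
end=$(date +%s.%N)
rm -f "$f"
rate=$(awk "BEGIN { printf \"%.0f\", $N / ($end - $start) }")
echo "approx fsyncs/sec: $rate"
```

Note this also pays the cost of forking dd 200 times, so it will undercount on fast drives; it's only meant to show whether you're in the hundreds or thousands.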

All new servers I build these days use only NVMe drives, because I can skip the HBA/Tri-Mode/RAID controller; the actual price difference ends up minimal and the performance increase is huge.

The only issue I have with huge NVMe arrays is ZFS itself. The penalty from ZFS is huge: I can't get read speeds above 20 GB/s no matter how much tuning I do, like a hard limit. It's even much worse inside a VM on a zvol.
With LVM/LVM-thin or a simple md array with ext4, I easily get over 40 GB/s. But well, I'm stuck with ZFS for multiple reasons; speed isn't everything xD
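For throughput numbers in that range you'd really want fio with multiple parallel jobs, but as the simplest smoke test, a dd read works (the file path and size here are placeholders; drop the page cache first with `echo 3 > /proc/sys/vm/drop_caches` as root, otherwise you're measuring RAM, not disks):

```shell
#!/bin/sh
# Minimal sequential-read smoke test: create a test file, then time reading
# it back. Single-threaded dd won't saturate a big NVMe array; for multi-GB/s
# measurements use fio with several jobs instead.
testfile=./read-test.bin
dd if=/dev/urandom of="$testfile" bs=1M count=64 2>/dev/null  # create test data
result=$(dd if="$testfile" of=/dev/null bs=1M 2>&1 | tail -n1)
echo "$result"
rm -f "$testfile"
```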

Anyway, I just mention this so that people don't get big dreams if they're using consumer SSDs :)
 
PS: I forgot to mention:
echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

That should roughly double your fsync numbers.
It's a dirty hack, but it makes testing one server against another a lot more reliable: with the ondemand governor you don't know at which frequency your test ran, so the results are less comparable.
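To check what you're currently running, and to switch back afterwards, you can read the same sysfs files (inside many VMs, or without a cpufreq driver loaded, these entries simply don't exist, so the loop guards for that):

```shell
#!/bin/sh
# Print the governor each core is currently using. No output means the
# cpufreq driver exposes no sysfs entries (common inside VMs).
governors=""
for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  if [ -r "$f" ]; then governors="$governors$f: $(cat "$f")\n"; fi
done
printf "%b" "$governors"

# To revert after benchmarking (needs root; which governors exist depends on
# your kernel/driver -- check scaling_available_governors first):
# echo "ondemand" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```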

Cheers