Done, thanks. As I see it, alignment must be handled at every level of the system (e.g. block devices, vdevs, zvols, VMs, etc.).
Thank you for these important comments, and yes, it will be VMs with MariaDB (InnoDB engine).
All of the last tests were with ashift=13 on the NVMe pool.
~# zpool get all | grep ashift
Thanks. I changed the default recordsize to 16K and got a double penalty on randwrite performance.
It's very important for my VMs with databases.
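For reference, the change was along these lines (dataset and zvol names here are placeholders, not my real ones):

~# zfs set recordsize=16K rpool/data                                # dataset holding the DB files
~# zfs create -V 32G -o volblocksize=16K rpool/data/vm-100-disk-0   # volblocksize can only be set at creation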
Last question: can I improve the performance of my NVMe mirror zpool by adding a SATA SSD as a SLOG vdev? Or would it only keep dirty in-flight data intact after a power loss...
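As I understand it, adding the SLOG would look roughly like this (pool name and device path are placeholders; I have not done this yet):

~# zpool add nvme-pool log /dev/disk/by-id/ata-SATA_SSD_SERIAL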
Is 4.5 KIOPS a normal result for NVMe in a ZFS mirror under 4K randwrite workloads? The specs say it should reach 21 KIOPS. Can you point me in the right direction for tweaking ZFS for better 4K randwrite?
And my pveperf FSYNCS/SECOND on the SATA SSD ZFS mirror is very low (356). The ZFS root was installed by default from the official...
Thanks. fio was run with direct and sync.
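The job was along these lines (a sketch of my conf; the target path, size, and runtime are from memory and may have differed):

[global]
ioengine=libaio
direct=1
sync=1
bs=4k
iodepth=1
numjobs=1
runtime=60
time_based

[4k-randwrite]
rw=randwrite
filename=/nvme-pool/fio-test.bin
size=1G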
Compare to the ZFS mirror on two SATA SSDs where the OS is installed (sync+direct):
pve-manager/6.2-4/9824574a (running kernel: 5.4.41-1-pve)
CPU BOGOMIPS: 319387.52
HD SIZE: 192.77 GB (rpool/ROOT/pve-1)...
I have a Hetzner AX61 server:
2x SATA SSD 240 GB for the OS (ZFS RAID-1, UEFI boot)
2x Toshiba NVMe U.2 KXD51RUE3T84 3.84 TB (for data)
fio test on the data pool:
ZFS pool: ashift=12, atime=off, compression=off
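The pool was created roughly like this (disk IDs are placeholders for the real serials):

~# zpool create -o ashift=12 -O atime=off -O compression=off nvme-pool mirror /dev/disk/by-id/nvme-KXD51RUE3T84_SERIAL1 /dev/disk/by-id/nvme-KXD51RUE3T84_SERIAL2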
Why is there such a huge difference between these results, 42 MB/s vs. 306 MB/s?...
Without size=, fio cannot start in the Windows VM:
4K-Q1T1-Rand-Write: you need to specify size=
fio: pid=0, err=22/file:filesetup.c:1007, func=total_file_size, error=Invalid argument
I set size=1G in your fio conf for the Windows VM; on the host everything runs fine without size. And here is what I got on a single Intel SSD on...
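So the Windows job section ended up roughly like this (only size= was added by me; the other lines are my guess at what your conf contains, based on the job name):

[4K-Q1T1-Rand-Write]
rw=randwrite
bs=4k
iodepth=1
numjobs=1
size=1G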
Thanks for the reply.
Yes, I tried the VM disk in different cache modes (writeback, none, writethrough, directsync); the best performance was in writeback mode. With cache=none, random read and write were about half of what writeback gave.
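For completeness, I switched the cache modes with qm (the VM ID, bus, and volume are placeholders for mine):

~# qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback   # also tried cache=none, writethrough, directsync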
I also tried some virtio-scsi drivers for Windows (the latest and stable from Fedora...