Done, thanks. As I understand it, alignment has to be consistent at every level of the system (e.g. block devices, vdev, zvol, VMs, etc.)
Thank you for these important comments, and yes, it will be VMs with MariaDB (InnoDB engine).
All the latest tests were run with ashift=13 on the NVMe pool.
~# zpool get all | grep ashift
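For reference, ashift is fixed per vdev at pool-creation time, so it has to be passed to zpool create; a sketch of what I mean (the pool name and device paths are placeholders, not my real ones):

```shell
# Check the current value (OpenZFS reports ashift as a pool property)
zpool get ashift nvmepool

# ashift cannot be changed afterwards; it must be set at creation,
# e.g. ashift=13 for 8K physical sectors (device paths are hypothetical)
zpool create -o ashift=13 nvmepool mirror /dev/nvme0n1 /dev/nvme1n1
```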
Thanks. I changed the default recordsize to 16K and got a roughly 2x penalty on random-write performance.
That's very important for my VMs with databases.
One last question: can I improve the performance of my NVMe mirror zpool by adding a SATA SSD as a SLOG vdev? Or would the only benefit be that sync writes survive a power loss intact...
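What I have in mind is something like this (a sketch only; the pool name and device path are placeholders). As I understand it, a SLOG only absorbs synchronous writes, async writes bypass it entirely, and a SATA SSD is slower than the NVMe drives, so this could even make sync writes slower on this pool:

```shell
# Hypothetical: attach a SATA SSD partition as a separate log (SLOG) device
# to the NVMe pool. Only sync writes go through the SLOG; async dirty data
# in RAM is lost on power failure regardless of the SLOG.
zpool add nvmepool log /dev/disk/by-id/ata-EXAMPLE-SSD-part4
```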
Is 4.5 KIOPS a normal result for NVMe in a ZFS mirror under a 4K random-write workload? The spec sheet says it should do 21 KIOPS. Can you point me in the right direction for tuning ZFS for better 4K random writes?
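For reference, the kind of fio job I'm measuring with looks roughly like this (a sketch; the target path, size, and runtime are placeholders, and sync=1/direct=1 match the sync+direct settings mentioned earlier in the thread):

```ini
; 4K synchronous random-write job, queue depth 1
; (filename and size are placeholders for my actual test file on the pool)
[global]
ioengine=libaio
direct=1
sync=1
bs=4k
size=1G
runtime=60
time_based

[randwrite-4k]
rw=randwrite
iodepth=1
numjobs=1
filename=/nvmepool/fio-test
```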
And my pveperf fsync rate on the SATA SSD ZFS mirror is very low (356). The ZFS root was installed by default from the official...
Thanks. fio with direct and sync:
Compare to the ZFS mirror on two SATA SSDs, where the OS is installed (sync+direct):
pve-manager/6.2-4/9824574a (running kernel: 5.4.41-1-pve)
CPU BOGOMIPS: 319387.52
HD SIZE: 192.77 GB (rpool/ROOT/pve-1)...
I have a Hetzner AX61 server:
2x SATA SSD 240 GB for the OS (ZFS RAID-1, UEFI boot)
2x Toshiba NVMe U.2 KXD51RUE3T84 3.84 TB (for data)
fio test on the data pool
ZFS pool: ashift=12, atime=off, compression=off
Why is there such a huge difference between these results? 42 MB/s vs 306 MB/s...
Without size=, fio cannot start on the Windows VM:
4K-Q1T1-Rand-Write: you need to specify size=
fio: pid=0, err=22/file:filesetup.c:1007, func=total_file_size, error=Invalid argument
I set size=1G in your fio config for the Windows VM; on the host everything works without size. And here is what I got on a single Intel SSD on...
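The job section I ended up with on the Windows side looks roughly like this (a sketch; the file path and the windowsaio engine are my assumptions, and the colon in the path has to be escaped in fio job files):

```ini
; On Windows, fio cannot infer the total file size for a new test file,
; so size= is required; without it fio aborts with err=22 (Invalid argument).
[4k-q1t1-randwrite]
ioengine=windowsaio
direct=1
bs=4k
rw=randwrite
iodepth=1
size=1G
filename=C\:\fio-test.bin
```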
Thanks for the reply.
Yes, I tried the different cache modes for the VM disk (writeback, none, writethrough, directsync). The best performance was in writeback mode; with cache=none, random read and write were about half of what I got with writeback.
I also tried several VirtIO SCSI drivers for Windows (the latest and the stable ones from Fedora...
Thanks. I tested fio with settings like CrystalDiskMark's:
On the Windows VM: 4K random read 19.6 MB/s, 4K random write 17.8 MB/s
On PVE 6: 4K random read 35.0 MB/s, 4K random write 67.8 MB/s
Those are dramatically low values for an SSD in a VM that is planned to run MSSQL :(
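For context, converting those throughput numbers to IOPS at a 4K block size (rough integer math, taking 1 MB = 1000 KB, so IOPS ≈ MB/s × 1000 / 4):

```shell
# ~17 MB/s random write at 4K, as measured inside the Windows VM
mb_per_s=17
echo $(( mb_per_s * 1000 / 4 ))   # prints 4250, i.e. roughly 4.3 KIOPS
```

So the VM is doing only a few thousand 4K write IOPS, far below what the SSD spec sheet promises.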
Setting barrier=0 in fstab gives the same result :(
Maybe CrystalDiskMark 6.02 isn't meaningful inside a VM and I need an alternative benchmark? What is the best tool to measure real 4K random read and write performance in a Windows VM? Thanks. Maybe fio? If so, what parameters are correct?
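Something like the following is what I imagine as a fio equivalent of CrystalDiskMark's 4K Q1T1 tests (a sketch; the test-file path, size, and runtime are placeholders, and on Windows the engine would presumably be windowsaio with a Windows path instead):

```shell
# 4K queue-depth-1 random read, then random write, against a 1 GB test file
# (path is a placeholder; on Windows use --ioengine=windowsaio and e.g. C\:\fio-test.bin)
fio --name=4k-q1t1-randread  --ioengine=libaio --direct=1 --bs=4k \
    --rw=randread  --iodepth=1 --numjobs=1 --size=1G \
    --runtime=60 --time_based --filename=/tmp/fio-test

fio --name=4k-q1t1-randwrite --ioengine=libaio --direct=1 --bs=4k \
    --rw=randwrite --iodepth=1 --numjobs=1 --size=1G \
    --runtime=60 --time_based --filename=/tmp/fio-test
```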
I have PVE 6.0-7 on a Dell R730.
PVE is installed on an HDD RAID-1,
and I have an SSD RAID-1 for the MSSQL VMs and SQL data.
SSD model: 2x SSDSC2KB480G8R, Dell-certified Intel S4x00/D3-S4x10 series SSDs (Intel D3-S4510 480 GB).
On the SSD RAID-1 I created a GPT partition formatted as ext4.
Created Windows 2016 Standart VM as...