True
Done, thanks. As I see it, alignment must hold at every level of the system (e.g. block devices, vdevs, zvols, VMs, etc.).
Thank you for these important comments, and yes, it will be VMs with MariaDB (InnoDB engine).
All the latest tests were with ashift=13 on the NVMe pool.
~# zpool get all | grep ashift
rpool...
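For reference, a shorter way to query ashift than grepping `zpool get all` (a sketch; `ashift` is exposed as a pool property on recent OpenZFS releases, and the NVMe device path below is a placeholder):

```shell
# Query ashift for every imported pool (OpenZFS 0.8+)
zpool get -H -o name,value ashift

# Cross-check against the sector sizes the NVMe namespace advertises
# (requires nvme-cli; /dev/nvme0n1 is a placeholder)
nvme id-ns -H /dev/nvme0n1 | grep 'LBA Format'
```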
Thanks. I changed the default recordsize to 16K and saw roughly a twofold randwrite performance penalty.
That's very important for my VMs with DBs.
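A minimal sketch of a dedicated dataset for InnoDB data files, which use 16 KiB pages, so a matching recordsize avoids read-modify-write amplification. The dataset name is a placeholder, and whether `logbias=throughput` helps depends on the workload:

```shell
# Dataset tuned for InnoDB (16K pages); names and values are illustrative
zfs create -o recordsize=16K \
           -o atime=off \
           -o logbias=throughput \
           rpool/data/mariadb

# Verify what actually got applied
zfs get recordsize,atime,logbias rpool/data/mariadb
```

Note that recordsize only affects files written after the property is set, so existing data must be rewritten (or copied) to pick up the new value.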
One last question: can I improve the performance of my NVMe ZFS mirror by adding a SATA SSD as a SLOG vdev? Or will it just keep in-flight (dirty) data intact after a power loss...
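For context: a SLOG only absorbs synchronous writes, and it generally only helps if the log device has lower write latency than the pool itself, so a SATA SSD in front of NVMe mirrors is likely to slow sync writes down rather than speed them up. If you do want to experiment, the rough shape of the commands is as follows (pool name and device path are placeholders):

```shell
# Attach a separate log (SLOG) vdev; only synchronous writes go through it
zpool add nvme-pool log /dev/disk/by-id/ata-SOME_SSD

# Check the sync behavior of the pool/datasets involved
zfs get sync nvme-pool
```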
Are 4.5 KIOPS normal results for NVMe in a ZFS mirror under 4K randwrite workloads? The specs say it should reach 21 KIOPS. Can you point me in the right direction for tweaking ZFS for better 4K randwrite?
And my pveperf fsyncs on the SATA SSD ZFS mirror are very low (356). The ZFS root was installed by default from the official...
Thanks. Fio with direct and sync enabled.
Compared to the ZFS mirror on two SATA SSDs where the OS is installed (sync+direct):
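A fio invocation roughly matching the test described in this thread: 4K random writes at queue depth 1 with O_DIRECT and per-I/O sync. The file path, size, and runtime are assumptions:

```shell
# 4K random write, QD1, O_DIRECT + O_SYNC (path/size are placeholders)
fio --name=4k-randwrite \
    --filename=/rpool/fio-testfile \
    --size=1G --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 \
    --direct=1 --sync=1 \
    --runtime=60 --time_based --group_reporting
```

Note that on older ZFS-on-Linux releases O_DIRECT is accepted but effectively a no-op (writes still go through the ARC), so `--direct=1` results on ZFS are not directly comparable to raw block-device numbers.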
# pveversion
pve-manager/6.2-4/9824574a (running kernel: 5.4.41-1-pve)
# pveperf
CPU BOGOMIPS: 319387.52
REGEX/SECOND: 3132034
HD SIZE: 192.77 GB (rpool/ROOT/pve-1)...
I have a Hetzner AX61 server:
2x SATA SSD 240 GB for the OS (ZFS RAID-1, UEFI boot)
2x Toshiba NVMe U.2 KXD51RUE3T84 3.84 TB (for data)
Test with fio on the data pool:
ZFS RAID-1
ZFS pool: ashift=12, atime=off, compression=off
Why is there such a huge difference between these results? 42 MB/s vs 306 MB/s...
Even...
On the Windows VM, the CPU is always at 100% while fio is running (Task Manager).
On the Debian VM, the CPU jumps between 35 and 55% (top inside the VM).
On the host, the CPU jumps from 30 to 100%, split between fio and z_wr_iss (seen in top).
Without size, fio cannot start on the Windows VM:
4K-Q1T1-Rand-Write: you need to specify size=
fio: pid=0, err=22/file:filesetup.c:1007, func=total_file_size, error=Invalid argument
I set size=1G in your fio config for the Win VM; on the host everything runs fine without size. And here's what I got on a single Intel SSD on...
Thanks, waiting for it.
Could the bottleneck be that the system is installed on the hard drives? In all my tests the VMs were located on SSD storage, but the PVE OS is installed on an HDD hardware RAID-1.
Thanks for the reply.
Yes, I tried different cache settings for the VM disk (writeback, no cache, write through, direct sync). Writeback gives the best performance; with no cache, random read and write are about half of writeback.
I also tried several VirtIO SCSI drivers for Windows (the latest and the stable ones from Fedora...
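For anyone reproducing this, the cache mode can be switched per disk from the Proxmox host. The VM ID, storage name, and disk volume below are placeholders:

```shell
# Set the cache mode on an existing virtio-scsi disk, then verify
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback
qm config 100 | grep scsi0
```

Keep in mind that writeback acknowledges writes from host RAM, so benchmark numbers with it can look better than what survives a power loss unless the guest issues flushes.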