Recent content by Detuner

  1. ZFS: fio random read performance not scaling with iodepth

    iostat output during ZVol benching (bs=8k, numjobs=1, iodepth=32):
    Device    r/s       w/s   rMB/s  wMB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
    sda       26015.00  0.00  203.24 0.00   0.00    0.00    0.00   0.00   0.22...
  2. ZFS: fio random read performance not scaling with iodepth

    Thanks a lot for your reply! I'm benching both reads and writes, but I'm stuck a bit on these weird read results, where I totally did not expect any problems. The direct SSD read was benched with "filename=/dev/sda" in the fio job description. The KVM ZVol results are what I see inside the VM: 8k randread from 16G...
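
    The direct-SSD bench described above can be sketched as a fio invocation. Only bs=8k, randread, iodepth=32, numjobs=1, and filename=/dev/sda come from the posts; ioengine, direct, and runtime are assumptions filled in for a typical raw-device read bench:

    ```shell
    # Sketch of the raw-device random-read job described in the thread.
    # bs, iodepth, numjobs, rw and filename are from the posts;
    # ioengine=libaio, direct=1 and the runtime are assumed defaults.
    # --readonly makes fio refuse any write to the raw device.
    fio --name=raw-ssd-randread \
        --filename=/dev/sda \
        --rw=randread --bs=8k \
        --iodepth=32 --numjobs=1 \
        --ioengine=libaio --direct=1 \
        --runtime=60 --time_based \
        --readonly
    ```

    Running the same job against the ZVol device inside the VM (swapping the filename) is what makes the host-vs-guest comparison apples to apples.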
  3. ZFS: fio random read performance not scaling with iodepth

    Hi! I've set about figuring out exactly how big the IO performance drop in KVM is compared to host ZFS performance. I have a Supermicro platform with 2 x Xeon Gold 6226R and 128 GB DDR4 RAM. The storage is 2 x Intel D3-S4610 (ssdsc2kg480g8) in a ZFS mirror, with pool ashift set to 13. Fresh install of PVE 6.3-2...
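
    The pool layout described (two SSDs in a ZFS mirror, ashift=13, i.e. 8 KiB sectors) would be created along these lines. This is a sketch: the pool name and the /dev/disk/by-id paths are placeholders, not taken from the post:

    ```shell
    # Sketch of the described pool: a two-way mirror with ashift=13.
    # "tank" and the by-id paths are placeholders -- substitute the real
    # by-id paths of the two Intel D3-S4610 drives.
    zpool create -o ashift=13 tank \
        mirror /dev/disk/by-id/ata-INTEL_SSD_A /dev/disk/by-id/ata-INTEL_SSD_B

    # Verify that ashift took effect on the pool:
    zpool get ashift tank
    ```

    ashift is fixed at vdev creation time, so it is worth verifying before running any benchmarks.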