Very slow I/O speed with kernel 6.14

seanr22a

I updated from kernel 6.8.12-11-pve to the latest 6.14 available today, 6.14.5-1-bpo12.

The disk I/O speed dropped from read: IOPS=123k to read: IOPS=37.3k, roughly a 70% loss. When I reboot into kernel 6.8.12-11-pve I get IOPS=123k again, so either something is wrong with the 6.14 kernel (or with the ZFS fixes made for it), or I'm missing something.
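
A quick way to confirm which kernel and ZFS module build each boot is actually running (not from the original post; these are standard uname/OpenZFS commands):

Code:
# Kernel currently booted
uname -r
# Userland and kernel-module ZFS versions (OpenZFS 0.8+)
zfs version
# Or read the loaded module version directly
cat /sys/module/zfs/version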

A little background:
Supermicro server, dual Xeon E5-2696 v4 with 1 TB of memory
Broadcom/LSI SAS3224 SAS-3 HBA
16× WD Enterprise 7200 RPM SAS3 HDDs in a ZFS RAID-10


This is the fio command run on the host:
fio job-cpu1.fio | grep "read:\|write:\|READ:\|WRITE:"

The job-cpu1.fio file:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
numjobs=1
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1 Linear
# iodepth=4 Very Light
# iodepth=8 Light
# iodepth=64 Moderate
# iodepth=256 Heavy
iodepth=64
cpus_allowed=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65
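
(fio also accepts range syntax, so this list could be written more compactly as cpus_allowed=0-21,44-65.) Assuming the intent is to pin the job to the first socket's cores and their SMT siblings, the mapping can be checked with lscpu; a minimal sketch:

Code:
# CPU numbers belonging to each NUMA node; on a dual 22-core
# E5-2696 v4, node0 is typically 0-21,44-65
lscpu | grep -i "numa node"
# Per-CPU view with node/socket/core columns
lscpu -e=CPU,NODE,SOCKET,CORE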

With kernel 6.14:
read: IOPS=37.3k, BW=171MiB/s (179MB/s)(3279MiB/19178msec)
write: IOPS=9356, BW=42.6MiB/s (44.6MB/s)(817MiB/19178msec); 0 zone resets
READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=3279MiB (3439MB), run=19178-19178msec
WRITE: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=817MiB (856MB), run=19178-19178msec

With kernel 6.8.12-11:
read: IOPS=123k, BW=566MiB/s (593MB/s)(3279MiB/5795msec)
write: IOPS=31.0k, BW=141MiB/s (148MB/s)(817MiB/5795msec); 0 zone resets
READ: bw=566MiB/s (593MB/s), 566MiB/s-566MiB/s (593MB/s-593MB/s), io=3279MiB (3439MB), run=5795-5795msec
WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=817MiB (856MB), run=5795-5795msec

I've rebooted between the kernels a couple of times, but the difference stays the same.
Is there something I'm doing wrong, or is kernel 6.14 buggy?
 
Looks like I'm the only one .... Just to get something to compare with, I updated to the 6.14 kernel on a small Xeon D-1541 server with 4 SATA drives in a ZFS RAID-10. Exactly the same thing, with about the same percentage difference in performance between 6.8 and 6.14 as I posted here yesterday. When I go back to 6.8, performance is back to normal.

I removed 6.14 on both servers and continue running 6.8.12-11; it's working just fine. I hope the ZFS performance issues are solved by the time 6.14 becomes the standard kernel.
 
I'm having similar issues... pretty much any backup reads fast from the source, then writes slowly to NFS or local disk.
But if I run dd etc. via the CLI, it's fine. I have this issue on 3 hosts running Proxmox 9 with 6.14.11-2-pve or 6.14.11-4-pve, and I even tried an older kernel.

I thought this might be caused by the hardware RAID-1, so I reinstalled Proxmox with ZFS on HBA-attached disks, and I get the same issue.
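
A minimal sketch of the kind of CLI dd check mentioned above (the target path matches the backup mount in the log below; the file name and sizes are placeholders):

Code:
# Sequential direct-I/O write to the NFS backup mount, bypassing the page cache
dd if=/dev/zero of=/mnt/pve/Proxmox-Backups/ddtest.bin bs=1M count=4096 oflag=direct status=progress
# Remove the test file afterwards
rm /mnt/pve/Proxmox-Backups/ddtest.bin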




Code:
INFO: include disk 'scsi0' 'msa-vol1:vm-705-disk-0' 50G
INFO: include disk 'scsi1' 'msa-vol2:vm-705-disk-0' 250G
INFO: creating vzdump archive '/mnt/pve/Proxmox-Backups/dump/vzdump-qemu-705-2025_10_27-18_17_36.vma.zst'
INFO: starting kvm to execute backup task
INFO: started backup task 'a43dc7aa-2ba5-4648-b053-0b12b03b515e'
INFO:   0% (623.5 MiB of 300.0 GiB) in 3s, read: 207.8 MiB/s, write: 96.8 MiB/s
INFO:   1% (5.9 GiB of 300.0 GiB) in 6s, read: 1.8 GiB/s, write: 6.3 MiB/s
INFO:   3% (11.5 GiB of 300.0 GiB) in 9s, read: 1.9 GiB/s, write: 430.7 KiB/s
INFO:   4% (12.8 GiB of 300.0 GiB) in 12s, read: 415.9 MiB/s, write: 2.7 KiB/s
INFO:   5% (15.7 GiB of 300.0 GiB) in 15s, read: 1016.4 MiB/s, write: 50.7 KiB/s