Proxmox 9.1 and extremely slow disk performance

Nov 29, 2025
I have an HPE DL20 Gen10+ with 2 x 2TB Crucial BX500 in RAID1 on a Smart Array E208i-a hardware RAID controller, upgraded from Proxmox 9.0 to 9.1.

After the upgrade to Proxmox 9.1, issuing a simple VM backup or cloning operation makes the system rapidly lose all performance, and even the GUI itself becomes inaccessible.

An SSH session into the system revealed some system load, and dmesg was full of:
[124112.750540] smartpqi 0000:08:00.0: scsi 0:1:0:0: waiting 40 seconds for LUN reset to complete (188 command(s) outstanding)
[124116.210549] smartpqi 0000:08:00.0: TASK ABORT on scsi 0:1:0:0 for SCSI cmd at 0000000018ea4479: SUCCESS
[124116.210569] smartpqi 0000:08:00.0: attempting TASK ABORT on scsi 0:1:0:0 for SCSI cmd at 00000000809b44fa
[124116.210570] smartpqi 0000:08:00.0: scsi 0:1:0:0 for SCSI cmd at 00000000809b44fa already completed
[124116.210577] smartpqi 0000:08:00.0: attempting TASK ABORT on scsi 0:1:0:0 for SCSI cmd at 00000000a785e80f
[124122.990475] smartpqi 0000:08:00.0: scsi 0:1:0:0: waiting 50 seconds for LUN reset to complete (176 command(s) outstanding)

Searching the web gave some clues about possible kernel issues, but nothing specific.

Just in case, I downgraded the kernel from 6.17.2-1-pve to 6.14.11-4-pve, only to end up with the same errors and an essentially dead system.
Downgrading further to 6.14.8-2-pve still gives me these dmesg messages:
[46611.559859] smartpqi 0000:08:00.0: attempting TASK ABORT on scsi 0:1:0:0 for SCSI cmd at 0000000094b25bbb
[46611.794531] smartpqi 0000:08:00.0: TASK ABORT on scsi 0:1:0:0 for SCSI cmd at 0000000094b25bbb: SUCCESS
[46611.794540] smartpqi 0000:08:00.0: attempting TASK ABORT on scsi 0:1:0:0 for SCSI cmd at 00000000e2c86dcb
[46611.997323] smartpqi 0000:08:00.0: TASK ABORT on scsi 0:1:0:0 for SCSI cmd at 00000000e2c86dcb: SUCCESS
[46611.997326] smartpqi 0000:08:00.0: attempting TASK ABORT on scsi 0:1:0:0 for SCSI cmd at 00000000071d51bc


But the system remains usable overall with these copy and clone operations running on that old kernel.

Is this a regression that has not been reported yet, or is it just my combination of hardware being unhappy with these latest kernels?
What should be done here?
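In the meantime, a possible workaround along the lines of the downgrade described above would be to pin the last-known-good kernel so the host keeps booting it. This is only a sketch, not a fix for the regression itself; it assumes the 6.14.8-2-pve kernel package is still installed, and the guard keeps the script harmless on a non-Proxmox host:

```shell
#!/bin/sh
# Workaround sketch, not a fix: keep booting the last-known-good kernel
# until the smartpqi regression is resolved. Assumes the 6.14.8-2-pve
# kernel package from the post above is still installed.
if command -v proxmox-boot-tool >/dev/null 2>&1; then
    proxmox-boot-tool kernel list              # show installed kernels
    proxmox-boot-tool kernel pin 6.14.8-2-pve  # boot this one from now on
    proxmox-boot-tool refresh                  # regenerate boot entries
else
    echo "proxmox-boot-tool not found; is this a Proxmox VE host?"
fi
```

`proxmox-boot-tool kernel unpin` reverts to the default (newest) kernel once a fixed package lands.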
 
Thanks for the hint. Nevertheless, that is not the answer I expected. This is not expected behaviour for that class of drives. I was able to use these drives for VMs on the very same machine when I ran ESXi v6.0, v7 and v8, with no issues. My question is rather: what is fundamentally different in how Proxmox handles the very same hardware/disks?
 
The Crucial BX500 SSDs may be fast when new, but they will slow down with use.
QLC means the SSD flash must store 4 bits per cell.

Proxmox VE version 9 is based on Debian 13, so search the internet for reports about your hardware with Debian 13.
Maybe you can install Debian 13 on your system and run some fio tests.
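A minimal fio random-write test along those lines might look like this; the filename, size and runtime are illustrative and should be adjusted for your system, and `--direct=1` bypasses the page cache so the controller and SSDs are actually exercised:

```shell
#!/bin/sh
# Quick fio sanity-test sketch. Adjust --filename, --size and --runtime
# for your system; run against a file on the affected RAID1 volume.
if command -v fio >/dev/null 2>&1; then
    fio --name=randwrite-test --filename=./fio.test \
        --rw=randwrite --bs=4k --size=1G --iodepth=32 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based \
        --group_reporting
    rm -f ./fio.test   # clean up the test file
else
    echo "fio not installed; try: apt install fio"
fi
```

Comparing the reported IOPS and latency between Debian 13 and Proxmox 9.1 on the same hardware would show whether the slowdown follows the kernel or the platform.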

But I would switch everything to enterprise hardware and never think about it again.
 