iSCSI/LVM RHEL guest disk scheduler selection

baalkor

New Member
Feb 24, 2025
Dear Folks,

We'll be running an RL10 VM on top of SAN+iSCSI+LVM (thick). According to "I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest", it seems that deadline should be selected instead of none (noop) by default.
Do you have any experience with this change? Will it make much of a difference in terms of I/O?

They state:
"Guests that use storage accessed by iSCSI, SR-IOV, or physical device passthrough should not use the noop scheduler. These methods do not allow the host to optimize I/O requests to the underlying physical device."

Does it apply if the VM disks are backed by an LV on top of iSCSI?
What would be the gain?

Sincerely
 
Hi @baalkor,

I understand the recommendation in the Red Hat documentation, but the right choice really depends on two factors: what you're optimizing for (latency, IOPS, or bandwidth) and how fast your iSCSI SAN actually is.

In general, with fast storage and modern virtualization hardware, noop tends to provide the best application latency, which is often the primary objective. That said, for general-purpose workloads, most applications won't see a dramatic difference between schedulers.
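If you want to check what the guest is actually using and try an alternative, the scheduler can be switched per block device at runtime (device names like vda or sda below are just examples; use whatever your virtual disk shows up as):

# show available schedulers; the active one is in brackets
cat /sys/block/vda/queue/scheduler
# typical output: [none] mq-deadline kyber bfq

# switch to mq-deadline (or back to none) on the fly, no reboot needed
echo mq-deadline > /sys/block/vda/queue/scheduler
echo none > /sys/block/vda/queue/scheduler

The change only lasts until reboot, which makes it handy for A/B testing.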

The best advice I can offer is to use fio to model your application's I/O behavior and validate the choice through testing.
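For example, a job along these lines can approximate a mixed random read/write pattern against a scratch file on the LV (the path, size, block size, and queue depth are placeholders; shape them to match your workload):

fio --name=sched-test --filename=/data/fio-testfile --size=10G \
    --rw=randrw --rwmixread=70 --bs=8k --ioengine=libaio --direct=1 \
    --iodepth=32 --numjobs=4 --runtime=120 --time_based --group_reporting

Run it once per scheduler setting and compare the latency percentiles (clat) and IOPS rather than just the averages.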

Also, keep in mind that schedulers can be cascaded: one in the guest operating on the virtual block device, and another in the hypervisor submitting I/O to the physical disks.
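In practice that means looking at both layers. For example (device names are illustrative; on the host the iSCSI LUN typically appears as a plain SCSI disk with the LV carved out of it):

# inside the guest: scheduler on the virtual disk
cat /sys/block/vda/queue/scheduler

# on the hypervisor: scheduler on the iSCSI-backed disk underneath the LV
cat /sys/block/sdb/queue/scheduler

If both layers reorder and merge aggressively, you can end up paying queuing latency twice, which is the usual argument for keeping the guest on none and letting the host-side scheduler do the work.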

Good luck! And, remember... testing beats guessing!!!


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
In our case we have some servers handling big files (1 GB to 200 GB, HDF5 and RDF) and running database stacks such as GraphDB.
 
Thank you for this answer.
 
I've used none/noop on Linux guests since forever on virtualization platforms, including VMware and Proxmox in production, with no issues. Per that RH article, I don't use iSCSI/SR-IOV/passthrough, so I let the hypervisor's I/O scheduler figure out I/O ordering.
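If anyone wants to make that persistent inside a guest, a small udev rule is one common way to do it (matching virtio disks here is just an example; adjust the KERNEL match to your device type):

# /etc/udev/rules.d/60-ioscheduler.rules
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"

A tuned profile can achieve the same thing if you're already using tuned in your images.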
 