Greetings everyone,
Good news! We've managed to carve out some time to compile the latest data set for our ongoing exploration of Windows Server 2022 on Proxmox.
Part 3: Computational Efficiency of Storage Controller Configurations under Constrained Bandwidth Workloads
Expanding upon the groundwork laid in Part 1, our third installment goes deeper into the computational efficiency of all storage controller configurations. While Part 2 scrutinized efficiency under an IOPS-based workload, Part 3 shifts its focus to efficiency under bandwidth-constrained workloads. To clarify, Part 2 assessed efficiency with a larger number of small-sized I/O requests, while Part 3 analyzes the efficiency of larger-sized I/O requests that are fewer in number. You can find Part 3 here:
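To make the distinction concrete, the two workload styles can be sketched as fio job definitions. Note that the block sizes, queue depths, and target device below are illustrative assumptions, not the actual benchmark settings used in the study:

```ini
# Illustrative only: parameters are assumptions, not the study's settings.

[iops-workload]
; Part 2 style: many small requests
filename=/dev/sdb
rw=randread
bs=4k
iodepth=32
direct=1

[bandwidth-workload]
; Part 3 style: fewer, larger requests
filename=/dev/sdb
rw=read
bs=1m
iodepth=8
direct=1
```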
TLDR: Here are the key takeaways of the bandwidth-constrained efficiency study in Part 3:
- The `virtio-scsi` controller with `aio=native` achieves the best overall efficiency score. `aio=native` was the most efficient aio mode for each controller type; `aio=threads` was the least efficient.
- With `aio=native`, `virtio-scsi` was 4% more CPU intensive than `virtio-blk` but generated 25% fewer context switches.
- With `virtio-scsi` and `aio=native`, an `iothread` introduces a small CPU efficiency overhead of 1.5% but reduces context switches by 5%.
- `vmware-pvscsi` was the most efficient storage controller option (for bandwidth) natively supported by Windows Server 2022. `vmware-pvscsi` with `aio=native` consumes 60% less CPU and generates 40% fewer context switches than `vmware-pvscsi` with `aio=io_uring`.
- The `SATA` and `IDE` controllers achieve the worst efficiency scores, primarily due to high context-switching rates.
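For readers who want to try the best-scoring combination, here is a minimal sketch of the corresponding Proxmox VM disk configuration. The VM ID (100) and storage name (local-lvm) are placeholders, not values from the article:

```
# Sketch of /etc/pve/qemu-server/100.conf entries.
# VM ID and storage name are assumptions for illustration.
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,aio=native,iothread=1
```

The same settings can be applied with the `qm set` CLI, e.g. `qm set 100 --scsihw virtio-scsi-single --scsi0 local-lvm:vm-100-disk-0,aio=native,iothread=1`. Note that `virtio-scsi-single` is the controller variant that gives each disk its own controller, which is what allows a dedicated `iothread` per disk.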
Part 1: https://kb.blockbridge.com/technote/proxmox-optimizing-windows-server/part-1.html
Part 2: https://kb.blockbridge.com/technote/proxmox-optimizing-windows-server/part-2.html
If you find this helpful, please let me know. Questions, comments, and corrections are always welcome.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox