Hey everyone,
Given the recent developments with VMware/Broadcom, we've noticed a surge in inquiries related to running Windows on Proxmox. To assist those transitioning away from VMware, we've thoroughly examined Windows Server 2022 storage controller compatibility, native driver support, storage performance, migration guidelines, and system efficiency using the latest hardware available.
We've got a lot of data, so we're releasing it in multipart form to keep it digestible.
Part 1: An Introduction to Supported Windows Storage Controllers, AIO Modes, and Efficiency Metrics
This initial technote establishes foundational system concepts for efficiency comparisons in upcoming releases. We delve into Proxmox storage controllers, Windows driver compatibility, Asynchronous I/O modes, and IOThreads. Additionally, we provide insights into our testing environment. You can find Part 1 here.
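To make the terminology concrete: in Proxmox, the storage controller is implied by the disk key (and, for SCSI disks, the `scsihw` setting), while the AIO mode is a per-disk option. Here is a minimal, hypothetical sketch of how these choices appear in a VM config file; the VM ID, storage name, and sizes are placeholders, not values from the technote:

```
# Hypothetical excerpts from /etc/pve/qemu-server/100.conf
sata0: local-lvm:vm-100-disk-0,size=32G              # SATA, uses the native Windows AHCI driver
scsi0: local-lvm:vm-100-disk-0,size=32G,aio=native   # virtio-scsi, needs the virtio guest drivers
virtio0: local-lvm:vm-100-disk-0,size=32G,aio=threads # virtio-blk, needs the virtio guest drivers
scsihw: virtio-scsi-single                           # SCSI controller model for scsi* disks
```

The `aio=` option accepts `io_uring` (the default on recent Proxmox releases), `native`, or `threads`; Part 1 walks through what each mode means.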
Part 2: Computational Efficiency of Storage Controller Configurations under Constrained IOPS Workload
Building upon the foundational knowledge laid out in Part 1, the second installment takes a deep dive into the computational efficiency of every possible storage controller configuration. We present raw efficiency data, taking into account CPU utilization and operating system context switches. Part 2 can be accessed here.
TL;DR: Here are the key takeaways from the IOPS efficiency study in Part 2.
For IOPS-intensive workloads running in a Windows Server 2022 guest backed by iSCSI shared storage:
- Virtio drivers provide up to 53% better CPU cycle efficiency compared to the best-performing native drivers included with Windows.
- Virtio drivers result in 81% fewer context switches compared to the best-performing native drivers included with Windows.
- virtio-scsi with aio=native is 84% more efficient than virtio-scsi with aio=io_uring in terms of CPU cycles and context switches.
- aio=threads is the least efficient model for asynchronous I/O for all storage controllers, resulting in the highest CPU cycle utilization and context switch rates.
- virtio-scsi with aio=native outperforms all virtio-blk configurations in terms of CPU cycle and context switch efficiency.
- An iothread has negligible efficiency overhead when used with virtio-scsi and aio=native.
- virtio-scsi with aio=native is the optimal configuration in terms of CPU cycle efficiency and context switch overhead.
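Putting the takeaways together, the combination Part 2 identifies as most efficient (virtio-scsi with aio=native, plus a dedicated IOThread, which adds negligible overhead) would look roughly like this in a VM config. This is a hedged sketch only: the VM ID, storage name, and disk size below are placeholders, so verify the details against your own setup before migrating:

```
# Hypothetical /etc/pve/qemu-server/100.conf excerpt
scsihw: virtio-scsi-single
scsi0: iscsi-lun:vm-100-disk-0,aio=native,iothread=1,size=64G
```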
1: https://kb.blockbridge.com/technote/proxmox-optimizing-windows-server/part-2.html
If you find this helpful, please let me know. More data and analysis to come. Questions, comments, and corrections are always welcome.
Blockbridge : Ultra low latency all-NVMe shared storage for Proxmox - https://www.blockbridge.com/proxmox