Windows file server guest - best practices for data disks

Jun 9, 2025
Hi,

So I need to move my legacy file servers (one bare-metal, one on an old ESXi node) to Proxmox, and I'm looking for advice on the best way to configure the new builds. Note that I am not migrating the old servers but building new ones and moving the data.

I've pondered two different scenarios for disk config/connectivity: one where the data disks (i.e., the non-OS disks) are virtualized, and one where the data disks are iSCSI targets connected directly by the guest. I've done both scenarios with the old builds; the bare-metal server has an iSCSI-connected data disk and the ESXi node has virtualized disks. Both options work, though the iSCSI disk has better performance overall, so that's the way I'm currently leaning, but I wanted to poll the group for thoughts, concerns, experiences, etc.

Thanks - Ed
 
Hi Ed, welcome to the forum.

We've done a deep dive on Windows performance in our 4-part series here: https://kb.blockbridge.com/technote/proxmox-optimizing-windows-server/part-1.html
You may find it interesting.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Hi, thanks for that... so if I understand correctly, in the context of systems with virtualized disks hosted on iSCSI-enabled storage, the use of virtio-scsi, aio=native, and iothread gives the overall best performance. How does this compare to a config where the OS disk of a Windows 2022 file server is a virtual disk and the data disk is an iSCSI-attached, non-virtualized LUN on a SAN?

In my legacy build (the one I'm migrating away from), the server on the ESXi node has both its OS disk and its data disk virtualized (DAS - RAID6 SAS array). The other server has a bare-metal install of Windows (an older build - 2016, I believe) with an iSCSI connection to a QNAP server configured as an iSCSI SAN. I don't have performance numbers comparing the two systems, but anecdotally the server with the iSCSI drive feels faster than the ESXi VM with DAS-based virtual disks. Am I correct in my above assumption that the disks for your test system are virtualized? Also, are the drives for the test server split across different systems (ex. OS drive local to pve node and data disk on the Blockbridge appliance, etc.) or are they all on the Blockbridge appliance?

Thanks again for info - it is appreciated.
 
in the context of systems with virtualized disks hosted on iSCSI-enabled storage, the use of virtio-scsi, aio=native, and iothread gives the overall best performance.
The study was oriented towards fully virtualized guests, and you are correct in your understanding.
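
For reference, a minimal sketch of what those settings look like when applied with the qm CLI; the VM ID (101) and storage name (tank) are placeholders, not values from the series:

# Use the single-controller variant of virtio-scsi so each disk can get its own I/O thread.
qm set 101 --scsihw virtio-scsi-single
# Attach the disk with native async I/O and a dedicated iothread (the default cache mode, none, is compatible with aio=native).
qm set 101 --scsi0 tank:vm-101-disk-0,aio=native,iothread=1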

How does this compare to a config where the OS disk of a Windows 2022 file server is a virtual disk and the data disk is an iSCSI-attached, non-virtualized LUN on a SAN?
A directly connected iSCSI disk depends on VM NIC optimization and introduces an entirely different path for I/O. We'd expect it to work fine, but we have not measured it or looked into optimizing it.
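
If you do experiment with that path, the tuning mostly moves to the guest NIC; purely as an illustration (the VM ID, bridge, and queue count are placeholders), virtio multiqueue is the kind of knob involved:

# Give the virtio NIC multiple queues so the guest can spread iSCSI traffic across its vCPUs.
qm set 101 --net0 virtio,bridge=vmbr0,queues=4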

Am I correct in my above assumption that the disks for your test system are virtualized?
Yes, all disks were presented via the PVE virtualization layer.
Also, are the drives for the test server split across different systems (ex. OS drive local to pve node and data disk on the Blockbridge appliance, etc.) or are they all on the Blockbridge appliance?
With Blockbridge, each PVE virtual disk has a dedicated counterpart VDISK, along with a dedicated iSCSI target, so there is no contention between the OS and data disks.
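
For illustration, the OS disk and data disk appear as separate entries in the VM config, each with its own iothread (the VM ID and storage name below are placeholders), and each entry maps to its own VDISK and iSCSI target on the backend:

scsihw: virtio-scsi-single
scsi0: yourstore:vm-101-disk-0,aio=native,iothread=1
scsi1: yourstore:vm-101-disk-1,aio=native,iothread=1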


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox