Support for Windows failover clustering

khblenk

Jul 2, 2020
I'm attempting to install a Windows Server 2022 Failover Cluster within my Proxmox environment to provide additional protection against operating system failures and to enable rolling updates for SQL Server.
I use a Fibre Channel SAN, with the LUNs presented to the Proxmox host.

The LUNs are attached to the guest systems as follows (for example):
Code:
scsi2: /dev/mapper/sqlcluster-quorum,cache=none,shared=1,aio=native,discard=on,serial=QM000000005
scsihw: virtio-scsi-pci

Unfortunately, Windows does not accept the disks and reports (translated from German):
The physical disk ... does not support the SCSI inquiry data (VPD descriptor, SCSI page 83h) required for failover clustering. Status: The request is not supported.

The QEMU drivers and guest tools are installed on Windows.

Are there any settings that can be adjusted to support VPD? Are there any alternative methods of integration (other than iSCSI), such as virtual WWNs?
 
Hello. Have you found a solution for this?
PS: I need to do the same, but with Ceph and without an FC SAN.
 
Hi khblenk
Vital Product Data (i.e., VPD) page 0x83 is the "Device Identification" page (see the SCSI Primary Commands specification). It is a table that contains descriptors that describe the SCSI device and transport protocols applied to a logical unit.

Basic VPD support does not unlock support for Windows Failover Clusters. The next hurdle would be support for persistent reservations, which requires strict management of initiator port identification bindings.
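
For reference, you can check what the LUN actually reports directly from the Proxmox host using sg3_utils (a sketch; it assumes the sg3_utils package is installed, and the device path is only an example):
Code:
# Dump the Device Identification VPD page (0x83):
sg_vpd --page=di /dev/mapper/sqlcluster-quorum

# Read the current persistent reservation keys; a failure such as
# "Illegal request" suggests the device/emulation lacks PR support:
sg_persist --in --read-keys /dev/mapper/sqlcluster-quorum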

Note that QEMU has enough functionality to do full passthrough. If you are using SCSI-compliant storage, installing a Windows Server 2022 Failover Cluster and passing storage validation is possible. As one data point, it works without issue with Proxmox on Blockbridge (using raw device paths). However, we natively support iSCSI, which simplifies things significantly.
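
To illustrate, full passthrough means attaching the LUN with QEMU's scsi-block device instead of the default scsi-hd emulation, so that guest SCSI commands (including PERSISTENT RESERVE IN/OUT) are forwarded to the physical LUN. A minimal sketch of the relevant QEMU arguments (paths and IDs are placeholders, not a tested Proxmox configuration):
Code:
-device virtio-scsi-pci,id=scsihw0
-drive file=/dev/mapper/sqlcluster-quorum,if=none,id=drive-scsi2,format=raw,cache=none
-device scsi-block,bus=scsihw0.0,drive=drive-scsi2
In Proxmox, custom arguments like these would go on the "args:" line of the VM configuration.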

I'm unsure how you would get FC storage to work easily in this application. Managing the combination of FC, multi-pathing, and persistent reservations is complicated.
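
At minimum, you would want to verify the path topology and per-path states on the host before trusting reservations across path failovers (assumes multipath-tools is installed):
Code:
# Show the multipath topology and path status for all maps:
multipath -ll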

@f.cuseo
Windows failover clusters require robust support for SCSI persistent reservations: CEPH, NFS, and LVM are not options. Emulated reservation support may be an option in some cases (perhaps using the discontinued CEPH iSCSI Gateway). However, given the complexity and risk, I would proceed with extreme caution.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
khblenk said:
I'm attempting to install a Windows Server 2022 Failover Cluster within my Proxmox environment ...

Hi!
Change the disk controller (scsihw) to the default (LSI ...) and do not use "discard=on" on the disks; then it will work.

I also do not recommend using any virtio-* emulation for Windows.
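
Applied to the configuration from the first post, that would look roughly like this (an untested sketch):
Code:
scsihw: lsi
scsi2: /dev/mapper/sqlcluster-quorum,cache=none,shared=1,aio=native,serial=QM000000005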
 
Windows failover clusters require robust support for SCSI persistent reservations: CEPH, NFS, and LVM are not options. ...
Thank you for your support. We don't use Windows Failover Cluster ourselves, but we need to set up a cluster for one of our customers.
I will take a look at Blockbridge, but I don't know if it is possible to add the server and license costs :(
We only need less than 8 TB of usable space, but with maximum availability, and ideally the ability to back up the whole Windows VMs with PBS.
 
We don't use Windows Failover Cluster ourselves, but we need to set up a cluster for one of our customers.
Understood. To clarify, I was speaking of running a nested Windows Failover cluster within PVE.

@emunt6

Thanks for your contribution. However, OP asked about Windows Server 2022. From what I understand, Windows no longer provides drivers for the older LSI controllers. We covered this in our technote series on Windows Server 2022: https://kb.blockbridge.com/technote...part-1.html#windows-server-2022-driver-status

Note that this technote series also covers the significance of VirtIO's performance and efficiency gains.
I cannot attest to the correctness or completeness of SCSI passthrough on those older controllers. Regardless, OP must be confident in managing the SCSI initiator identities used for persistent reservations, as they are essential to correctness and integrity.

Lastly, it's worth clarifying that the Windows cluster compliance validation test referenced in my original reply was run with virtio-scsi. So, we can confirm that it works!


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

There is another option: "VirtIO disk (SCSI) + virtual disk (VHDX) + Microsoft SQL Cluster".
- The VM's main disk is VirtIO-based SCSI; on top of that, add a VHDX disk image and use this virtual disk as the cluster disk.
(Disk Management > Action > Create VHD > VHDX)
- You don't need to install the Hyper-V role.
- You can resize the VHDX image with diskpart:
Code:
diskpart
rem Point diskpart at the VHDX file and grow the container:
select vdisk file="Z:\Disk.vhdx"
expand vdisk maximum=<size in MB>
rem Then extend the volume inside it to use the new space:
list volume
select volume <number>
extend
exit
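
Creating and attaching the VHDX can also be scripted with diskpart instead of using the Disk Management GUI, roughly like this (the file path and size are only examples):
Code:
diskpart
rem Create a 100 GB dynamically expanding VHDX and attach it:
create vdisk file="Z:\Disk.vhdx" maximum=102400 type=expandable
attach vdisk
rem Initialize, partition, and format the new disk as usual afterwards.
exit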


This solution works around the SCSI-3 Persistent Reservation support that is missing from QEMU's VirtIO emulation but required by MS SQL clustering.
 
This solution works around the SCSI-3 Persistent Reservation support that is missing from QEMU's VirtIO emulation but required by MS SQL clustering.
If I understand your post correctly, this configuration moves your HA into the application (SQL). In many cases this is the best approach, especially if the application natively supports this functionality.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
There is another option: "VirtIO disk (SCSI) + virtual disk (VHDX) + Microsoft SQL Cluster". ...
Just a question: on which "physical" drive do you create the VHDX image? With two different Windows servers, don't you need a shared drive?
 
Just a question: on which "physical" drive do you create the VHDX image? With two different Windows servers, don't you need a shared drive?
I think this is the main question. How must the "shared" disk be created and configured in the VMs' configuration? I am a total Proxmox beginner, and I have tried VirtIO SCSI with different formats and options, but the storage does not become available in Windows Failover Cluster Manager. :-(
 
On VMware it is straightforward to create a shared disk for a Microsoft Failover Cluster environment: simply create a thick-provisioned vmdk (disk file), add a SCSI controller of type "VMware Paravirtual" to both VMs you want to use for the cluster, and connect the disk created in the previous step to this controller. Set the controller's "SCSI Bus Sharing" option to "Virtual" and you are done.

It would be really nice if there were a similar solution for Proxmox. I know it is not really a solution for a production environment, but it would allow, for example, updating the Windows servers and rebooting one of them without interrupting services.
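
For comparison, the closest Proxmox equivalent I have found is attaching the same block device to both VMs with the shared flag, e.g. (a sketch; the VM IDs and device path are examples, and as discussed above this alone does not provide the SCSI-3 persistent reservations that cluster validation checks for):
Code:
qm set 101 --scsihw virtio-scsi-pci --scsi1 /dev/mapper/shared-clusterdisk,shared=1,cache=none
qm set 102 --scsihw virtio-scsi-pci --scsi1 /dev/mapper/shared-clusterdisk,shared=1,cache=none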
 
There is another option using "VIRTIO-DISK(scsi) + Virtual-Disk(VDHX) + Microsoft SQL Cluster"
- VMs main disk is virtio-based scsi, after add VHDX disk-image on top, use this virtual-disk for cluster-disk.
( diskmanager > Action > Create VHD > VHDX )
View attachment 67519
- You don't need to install the "HyperV" role.
- You can resise the VHDX image with diskpart
Code:
diskpart
select vdisk file="Z:\Disk.vhdx"
expand vdisk maximum=<size in MB>
list volume
select volume <number>
extend
exit


This solution bypass the virtio problem "SCSI-3 Persistent Reservation" that is missing from Qemu, but it requires for MS SQL.
Hello emunt6.
Can you explain all the required steps in more detail, and whether this can be used in a production environment?
 
