Passing RAID to Windows Server

Randell
Dec 27, 2018
This is more of a question; I'm not saying anything is wrong with Proxmox.

I am able to pass an Areca 1882ix-24 through to Windows 10 and use the latest Areca driver (6.20.00.33) without any issue.

If I attempt to pass this card through to Windows Server 2016 or even Windows Server 2019, I start having all kinds of problems: Server will think the volume is RAW, or it will want to initialize it.

This was an empty volume, so I went ahead and deleted the partition in Server and recreated it. When I attempt to format it, as either NTFS or ReFS, the format fails.

I have tried Windows Server 2016 (Feb 2018 media) without updates, 2016 with the latest updates, Windows Server 2019 (December media), and 2019 with the latest updates. I tried those with both the Areca .31 and .33 drivers, with combinations of SeaBIOS and OVMF, with and without q35, and with either a SCSI or a SATA boot disk in the VM. I always get errors in Server.
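For reference, the OVMF + q35 variant of what I'm testing looks something like this (the VMID 100 and PCI address 01:00.0 are placeholders, not my actual values):

# enable UEFI firmware and the q35 machine type for the VM
qm set 100 --bios ovmf --machine q35
# pass the Areca controller's PCI function through to the guest
qm set 100 --hostpci0 01:00.0,pcie=1

The SeaBIOS variant is the same minus --bios/--machine, and without pcie=1 (that flag requires q35).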

I switch over to Windows 10 and it works just fine.

The only way I got Server to work is to pass the disk (the RAID volume) and not the controller.
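For anyone curious, passing the volume instead of the controller just means attaching the block device to the VM, roughly like this (VMID and device path are placeholders):

# attach the RAID volume's block device to the VM as a SCSI disk
qm set 100 --scsi1 /dev/disk/by-id/<areca-volume-id>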

I cannot imagine that the Areca card has problems with Windows Server 2016/2019. The only thing I can think of is that Server just doesn't like the card being passed through.

Has anyone run into something like this?
 
You are using virtualization, so why do you want to pass such VDs directly to the VM? Is it not enough to create a virtual disk on the array? That way you don't add additional layers in between.

But did you check whether all disks are okay and the controller has no faults?
 
Everything works great passing the controller to Windows 10. It all falls apart passing the controller to Windows Server 2016/2019.

Inside the Windows 10 VM, I was able to fill 75% of the new volume by copying data from a second volume on the same controller. Inside the Server VM, I can't even format the new volume. It is just a mess.

Maybe there is some sort of controller error that Windows 10 is just silently ignoring but Server is tripping over, but it seems very odd to me.

If I had a spare machine, I'd install Windows Server on metal and see if that *fixes* it.
 
If I had a spare machine, I'd install Windows Server on metal and see if that *fixes* it.
Again, why do you need to pass it directly to Windows? :)
In most cases it is enough to create a virtual disk on it (with PVE, not the controller BIOS) and let the node manage the hardware.
 
There isn't a *need* to pass it to Windows. At least, not as long as I use it as RAID and not JBOD. (You can only pass 15 SCSI disks to a VM.)

It works in Windows 10. It doesn't work in Windows Server. Since the volume is empty right now, it isn't a big deal. My biggest concern is understanding the root of the problem so that I don't hit it in the future when there is data on the volume.
 
There isn't a *need* to pass it to Windows. At least, not as long as I use it as RAID and not JBOD. (You can only pass 15 SCSI disks to a VM.)
You do not need to pass more than 15 drives to the VM. Why not create a RAID (ZFS, hardware, or software), mount this big volume in Proxmox as storage, and then create a new disk for the VM on it? That is the normal way virtualization works. Yes, you can pass such devices through, but in most cases it isn't needed; it only adds an additional layer that creates more problems than it solves, and you have more work maintaining it.
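A minimal sketch of that workflow, assuming the RAID volume shows up as /dev/sdb on the node (the device name, VG name, storage ID, VMID, and size are examples, not values from this thread):

# turn the hardware RAID volume into an LVM volume group on the node
pvcreate /dev/sdb
vgcreate areca /dev/sdb
# register it as a PVE storage for VM disk images
pvesm add lvm areca-store --vgname areca --content images
# create a new 64 GB virtual disk for the VM on that storage
qm set 100 --scsi1 areca-store:64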

Take a look at whether the controller is certified to run with those Windows Server versions. But there might also be a problem with how the controller is passed through; maybe there is a channel that does not get passed to the VM, and that is the problem. Perhaps Windows Server is just a little diva :D
 
Potentially I might want to create a 15+ drive Storage Spaces based pool. Then I'd need to pass through the controller or an HBA (I'd just get the HBA; it would be silly to use the expensive RAID card for JBOD). I thought about doing this, but I don't think I will. Yes, I know about ZFS; I just don't want to use it.

The Areca 1882ix-24 is supported on Windows Server 2016/2019. There is a WHQL-certified driver for 2016 (I tried both 2016 and 2019).

Yes, it would appear Server is having some sort of issue with the passthrough.

I will just pass the single RAID volume as a disk to the VM instead of the controller. I am just uncomfortable not knowing the *why*.
 
Potentially I might want to create a 15+ drive Storage Spaces based pool.
Okay, now it's clear to me. Yes, then there is no other way if you want to use that Windows feature :)

Did you check whether the firmware of your server / controller is the newest? If not, a firmware update might help you here.
 
Did you check whether the firmware of your server / controller is the newest? If not, a firmware update might help you here.
I did. About the only thing I didn't try was downgrading the firmware.

Oh well. I'll just pass the volume. I am tired of dealing with it and that seems to work.
 
Just out of curiosity, why do you need to forward the whole RAID controller and the drives directly to the VM? In 99% of use cases you want your machine to be portable between servers and hypervisors, in case you need to restore the VM to another hardware server or hypervisor.
In my setups I've used VirtIO storage, and it gets near-native speed from the RAID controller.
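A minimal sketch of such a VirtIO setup (the VMID, storage ID, and size are examples):

# use the paravirtualized SCSI controller and allocate a 64 GB disk on the array-backed storage
qm set 100 --scsihw virtio-scsi-pci --scsi0 areca-store:64,discard=on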
 
