Hello,
I have a bit of a niche setup.
I have 8x NVMe drives in a custom-built computer and what I am trying to do is maximize processing performance by eliminating as many bottlenecks as I can.
These 8x NVMe drives are attached to the motherboard via PCIe (4x NVMe disks in each ASUS Hyper M.2 carrier card).
I have 3x Windows 10 virtual machines, each running its own processing software and using vGPU profiles for a boost in processing speed.
This software writes its processed data to a shared E:\ volume, which is an SMB share exported to the virtual machines by a TrueNAS VM on the same Proxmox host. The underlying storage is software RAID: a RAIDZ2 pool built from the 8x 4TB NVMe disks, which gives me about 25TB of total capacity.
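(For reference, the usual back-of-the-envelope for RAIDZ2 is (disks - 2 parity) x disk size, so (8 - 2) x 4TB = 24TB of usable space before ZFS overhead, which is in the same ballpark as the ~25TB I'm seeing.)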
I no longer want to use SMB sharing or software RAID, and ideally no sharing via any networking protocol at all: the TrueNAS instance is bottlenecking processing performance because all VMs write to the SMB share through its 10Gbps vNIC. On top of that, the TrueNAS instance can only have 2 CPU cores assigned to it, so it is a less than ideal setup.
Each NVMe disk is a PCIe 4.0 disk with 7,000MB/s / 7,300MB/s read/write speeds, which translates to ~56Gbps per drive.
A 10Gbps vNIC simply will not cut it.
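To spell out the arithmetic: 7,000MB/s x 8 bits/byte = 56,000Mbps, or ~56Gbps, so a single drive alone is about 5.6x the vNIC, and all eight together add up to something on the order of 450Gbps of raw flash bandwidth.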
I instead want to implement a hardware RAID card via PCIe to offload RAID management and I/O to that card, eliminating the 10Gbps vNIC bottleneck and taking the software RAID processing burden off the host CPU.
I found a RAID card, the Broadcom MegaRAID 9560-16i: https://www.broadcom.com/products/storage/raid-controllers/megaraid-9560-16i
This card can RAID together up to 32x NVMe disks using its RAID-on-Chip. I want to install it in the motherboard and use it to present the NVMe RAID array to Proxmox.
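As I understand it, once the card is installed, the virtual drive should simply appear to Proxmox as a normal block device via the in-kernel megaraid_sas driver. Something like this should confirm it (a sketch, assuming that standard driver):

```
# Confirm the controller is visible and which kernel driver bound to it
lspci -k | grep -iA3 megaraid
# The exported virtual drive should show up as a plain block device, e.g. /dev/sdX
lsblk
```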
Once I do this, my primary aim is to present the single hardware RAID pool to all 3x Windows 10 virtual machines so that they can all read and write the same data without going through any network sharing protocol, vNIC, or software RAID.
I'm not sure whether I can create a single virtual disk spanning the entire hardware RAID pool and have all virtual machines share that same virtual disk. So far it doesn't look like that is possible.
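From what I've read, Proxmox will technically let you attach the same block device to multiple VMs, but with plain NTFS in the guests that is a recipe for corruption, since nothing coordinates concurrent writes - that would need a cluster-aware filesystem, which Windows 10 doesn't do out of the box. Just to illustrate what I mean (VM IDs and the device path are made up):

```
# Illustrative only: attach the same virtual drive to two VMs.
# Without a cluster-aware filesystem in the guests, concurrent writes corrupt data.
qm set 101 --scsi1 /dev/disk/by-id/scsi-MegaRAID_VD0
qm set 102 --scsi1 /dev/disk/by-id/scsi-MegaRAID_VD0
```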
Alternatively, I could create 3x virtual disks in that RAID pool - e.g. 10TB each - and attach one virtual disk to each VM. But then each VM sees its own separate volume, and if I add more processing VMs in the future, I would have to divide the pool even further, shrinking the storage available to each VM, which is not something I can afford.
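If I went this route, I assume the carving would be done with StorCLI, something like the sketch below (the syntax, controller number, and enclosure:slot IDs are illustrative - I would double-check everything against Broadcom's StorCLI reference for the 9560-16i):

```
# Illustrative only: three 10TB RAID 6 virtual drives from one drive group
storcli64 /c0 add vd type=raid6 drives=252:0-7 size=10TB,10TB,10TB
```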
Another idea is to do a 1:1 passthrough of the RAID controller to one single VM and install the controller driver in that primary Windows guest. But then I believe I would still be limited to vNIC access if I want to share the volume with the other VMs?
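The passthrough itself looks straightforward, for what it's worth (the PCI address here is hypothetical - I'd look up the real one first):

```
# Find the controller's PCI address (1000 is the Broadcom/LSI vendor ID)
lspci -d 1000:
# Pass the whole controller through to VM 100 (q35 machine type assumed)
qm set 100 --hostpci0 0000:41:00.0,pcie=1
```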
Also, it does not appear that I can virtualize the RAID card with SR-IOV like I can with the GPU, so I won't be able to attach virtual RAID controller functions to the VMs.
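In case it's useful, this is how I've been checking for SR-IOV support (again with a hypothetical PCI address; the sysfs attribute only exists on SR-IOV-capable devices):

```
# Look for an SR-IOV capability in the device's config space
lspci -s 41:00.0 -vvv | grep -i sr-iov
# This file only exists if the device supports SR-IOV
cat /sys/bus/pci/devices/0000:41:00.0/sriov_totalvfs
```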
Here are the hardware specs of the build:
ASUS WRX80E SAGE WIFI II motherboard
AMD Threadripper 5975WX CPU
384GB DDR4 memory array (forgot the manufacturer)
NVIDIA RTX A5000 GPU
8x Kingston Fury NVMe disks
2x ASUS Hyper M.2 cards
The Proxmox kernel version on this host is 6.5.13-6.
Any ideas would be greatly appreciated!