Hi, I'm new to Proxmox but have some experience with the VMware line of products. I'm looking for advice on designing a beastly workstation that will run Debian / Proxmox and host a handful of VMs - mainly Windows and 1-2 flavors of Linux. If money were no object, what would you recommend as a storage solution for achieving the fastest possible disk performance (both random & sequential) within the VMs, while still retaining capabilities like snapshotting / backups (and, if possible, some kind of hardware redundancy like RAID1 or RAID10)?
The workstation will be used for a variety of tasks with diverse I/O patterns in the VMs. I don't need to optimize for prolonged, high-queue-depth I/O (like you would see hosting a saturated database server), although I do still care about latency.
In case it helps, the motherboard I'm considering at present is an Asus Pro WS WRX90E-SAGE SE or ASRock WRX90 WS EVO, with a Threadripper Pro (e.g. 9985WX). The former supports four onboard M.2 drives (PCIe 5.0 x4), which could be populated with WD SN8100s. I'm also investigating the Areca ARC-1689-8N (with the ARC-1689-CBM supercap module for power-loss protection), which accommodates up to eight M.2 drives and has been benchmarked at 60GB/s sequential, although I gather random performance takes a hit compared to a directly connected NVMe. (For what it's worth, I've been using Areca controllers for a couple of decades and am generally a fan, though I realize purists may recommend staying away from any sort of hardware RAID.) I'm also open to more enterprise-centric drives, though I do want to keep the storage contained to the same box - so no NVMe-oF or other exotic externals.
I had conversations with the GRAID folks a few years ago and am open to having another look at their stuff as well. At the time it was early days, and I had some questions around the resilience of their stack (e.g. there were some gaps in edge cases like power loss during a rebuild, which I believe have since been addressed, and some other technical unknowns which would have boiled down to buying their hardware and testing it by throwing various failure modes at it). I notice they've posted some info and charts relating to Proxmox, which seems encouraging. I'm not clear whether their tech offers any advantage here if you're not doing parity.
Any other promising avenues I've overlooked?
And more importantly, is there any place I can find real-world storage benchmarks of high-performance Proxmox machines people have already built?
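For context, when comparing any of these options I'd plan to run the same in-guest fio workload against each setup, something along these lines (just a sketch - the filename is a placeholder for whatever test file or scratch device applies):

```ini
; fio job: 4K random reads at low and moderate queue depth, plus 1M sequential
; run inside the guest against the virtual disk under test
[global]
ioengine=libaio
direct=1
runtime=60
time_based=1
group_reporting=1
filename=/dev/vdb        ; placeholder - point at a scratch device or test file

[randread-4k-qd1]
stonewall
rw=randread
bs=4k
iodepth=1

[randread-4k-qd32]
stonewall
rw=randread
bs=4k
iodepth=32
numjobs=4

[seqread-1m]
stonewall
rw=read
bs=1m
iodepth=16
```

The QD1 random job is the one I care most about given the latency sensitivity, but published numbers I've seen tend to only quote high-queue-depth or sequential figures, hence the question about real-world results.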