Per https://www.proxmox.com/en/services/training-courses/videos/proxmox-virtual-environment/whats-new-in-proxmox-ve-9-1, for Windows Server 2025 VMs you'll want to enable the nested-virt flag under the Extra CPU Flags options.
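The GUI checkbox is the per-VM toggle; the long-standing alternative is CPU type 'host' with nesting enabled in the KVM module. Roughly like this (a sketch only; VMID 100 is a placeholder, and AMD hosts use kvm_amd instead of kvm_intel):
  # confirm nested virtualization is enabled on the host (should print Y or 1)
  cat /sys/module/kvm_intel/parameters/nested
  # CPU type 'host' passes the virtualization extensions through to the guest
  qm set 100 --cpu host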
Since Proxmox is Debian with an Ubuntu LTS kernel, it should work.
If it were me, I would just go straight to flash storage and skip it.
I do, however, use the Intel Optane P1600X as a ZFS RAID-0 OS drive for Proxmox without issues.
If you plan on using shared storage, your officially supported Proxmox options are Ceph and ZFS (neither works with RAID controllers like the Dell PERC).
Both require an IT/HBA-mode controller. I use a Dell HBA330 in production with no issues.
Technically, you do not if this is a home lab, which I am guessing it is.
Now, it is considered best production practice to separate the various networks into their own VLANs, especially Corosync, which should get its own isolated network switches...
Better off with a Dell HBA330. It's an LSI 3008 IT-mode controller chip anyhow. Just make sure to update the firmware to the latest version at dell.com/support
As was mentioned, getting a new drive is "nice" but not really required.
With a reputable enterprise flash drive, getting it used is fine. I have used 5-year-old Intel enterprise SSDs and they still show 100% life.
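You can check the remaining endurance yourself before buying with smartctl from the smartmontools package (quick sketch; /dev/sda and /dev/nvme0 are placeholders):
  apt install smartmontools
  # SATA/SAS Intel drives report wear as the Media_Wearout_Indicator attribute
  smartctl -a /dev/sda
  # NVMe drives report it on the "Percentage Used" line
  smartctl -a /dev/nvme0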
At home, I use Intel Optane...
This thread at https://forums.servethehome.com/index.php?threads/enterprise-ssd-small-deals.48343 has a list of enterprise SSDs for sale.
Another option is getting a server that supports U.2/U.3 flash drives. They can be surprisingly cheap...
I compile this utility to format SAS drives to 512-byte sectors or do a low-level format: https://github.com/ahouston/setblocksize
You'll need to install the sg3_utils package.
While doing that, might as well install the sdparm package and enable the write...
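For the curious, the whole dance looks roughly like this (a sketch only; /dev/sg2 and /dev/sdb are placeholders, sg_format wipes the drive so triple-check the device first, and on Debian/Proxmox the package names are sg3-utils, sdparm, and lsscsi):
  apt install sg3-utils sdparm lsscsi
  # map /dev/sdX block devices to their /dev/sgX SCSI generic nodes
  lsscsi -g
  # low-level format to 512-byte sectors (destroys all data on the drive)
  sg_format --format --size=512 /dev/sg2
  # enable the drive's write cache persistently, then confirm
  sdparm --set WCE=1 --save /dev/sdb
  sdparm --get WCE /dev/sdb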
As I mentioned before, so much drama with PERC controllers in HBA mode. Just swap them out for a Dell HBA330, a true IT-mode storage controller. Your future self will thank you. They are cheap to get.
I use Dell Intel X550 rNDC in production without issues. Both the 2x1GbE-2x10GbE and 4x10GbE versions.
The 10GbE uses the ixgbe driver and the 1GbE uses the igb driver.
Use 'dmesg -t' to confirm. Obviously flash the rNDC to the latest firmware...
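Quick way to confirm which driver actually bound (eno1 is just a placeholder interface name):
  dmesg -t | grep -Ei 'ixgbe|igb'
  # the "driver:" and "firmware-version:" lines tell you what you're actually running
  ethtool -i eno1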
It's these one-off situations with the megaraid_sas driver; just installing a Dell HBA330, which uses the much simpler mpt3sas driver, avoids all this drama. LOL.
In addition, the Dell HBA330 is very cheap to get.
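If you're not sure what your current controller is running on, lspci shows the bound kernel driver (sketch):
  # -k lists the kernel driver in use for each device
  lspci -k | grep -i -A3 -E 'raid|sas'
  # PERC in RAID mode -> megaraid_sas, HBA330 (IT mode) -> mpt3sas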
While it's true that 3 nodes is the minimum for a Ceph cluster, you can only lose 1 node before losing quorum. You'll really want 5 nodes. Ceph is a scale-out solution: more nodes/OSDs = more IOPS. With 5 nodes you can lose 2 and still have quorum.
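You can keep an eye on the monitor quorum with the usual Ceph tooling (sketch):
  # overall cluster health, including which mons are in quorum and OSD counts
  ceph -s
  # detailed list of the monitors currently in quorum
  ceph quorum_status --format json-pretty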
While...
You really need to confirm that the write cache is enabled on each drive via the 'dmesg -t' output. If the write/read cache is disabled, it really kills IOPS.
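Something like this is what I mean (sketch):
  dmesg -t | grep -i 'write cache'
  # you want to see lines like: [sdb] Write cache: enabled, read cache: enabled, ...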
While technically 3 nodes is indeed the bare minimum for Ceph, I don't consider...
Been migrating 13G Dell VMware vSphere clusters over to Proxmox Ceph using SAS drives and 10GbE networking on isolated switches. Ceph is a scale-out solution, so more nodes = more IOPS. Not hurting for IOPS on 5-, 7-, 9-, and 11-node clusters. Just...
Disclaimer: I do NOT use ZFS shared storage in production. I use Ceph for shared storage. I do use ZFS on standalone servers, with ZFS RAID-1 to mirror Proxmox on small 76GB SAS drives.
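Verifying the mirror is healthy is a one-liner (sketch; rpool is the default pool name the Proxmox installer creates, adjust if yours differs):
  # should show a mirror-0 vdev with both drives ONLINE
  zpool status rpool
  zpool list rpool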
With that being said, your best bet is ZFS over iSCSI per...