Proxmox Hyperconverged Infrastructure - Storage Hardware

KyleS

New Member
Jan 21, 2025
Hello!
We are currently planning to migrate from vSphere to Proxmox while keeping our infrastructure hyper-converged. Reading over the Proxmox VE documentation and forum posts, I can see that Ceph is the 'go-to' software solution for hyper-convergence; however, we wanted to ask what hardware solutions people are actually using for their storage.

We currently use a SAN integrated with vSphere over FC (SCSI) that is approaching its best-before date, and whatever solution we move to, we want to compromise as little as possible compared to our current setup.

What hardware/solution are people using for storage in their hyper-converged Proxmox setups? What do you consider the gold standard?

Thanks!
 
Currently I would say the gold standard is Ceph with datacenter NVMe SSDs (Kioxia etc.) and a redundant 100G backend.

Ceph can be configured with 3-way replication (size=3), which is best for most performance needs. If you have large drives attached to VMs, you can use erasure coding (e.g. k=3, m=2, or larger k) to save money, but storage is relatively cheap these days.
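For reference, a minimal sketch of both options on the Ceph CLI; pool and profile names are just examples, and the PG counts depend on your cluster:

Code:
# Replicated pool: 3 copies, writes still accepted with 2 copies available
ceph osd pool create vm-pool 128 128 replicated
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2

# Erasure-coded pool for capacity: k=3 data + m=2 coding chunks per object
ceph osd erasure-code-profile set ec-3-2 k=3 m=2 crush-failure-domain=host
ceph osd pool create ec-data 128 128 erasure ec-3-2
ceph osd pool set ec-data allow_ec_overwrites true   # required for RBD/CephFS data on EC pools

Note that an RBD image on an EC pool still keeps its metadata in a replicated pool and only places the data there (the --data-pool option of rbd create).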

Not too expensive, and plenty of headroom for most clusters. I would say 5 nodes is the minimum, just for redundancy; that way you can have two nodes fail.

Then, in another datacenter, you should run Proxmox Backup Server.
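Hooking the PBS instance into the cluster is then a one-liner; a sketch, where the storage ID, hostname, datastore name, user and fingerprint are all placeholders:

Code:
# Add a Proxmox Backup Server datastore as backup storage on the PVE side
pvesm add pbs pbs-offsite --server pbs.example.com --datastore offsite \
      --username backup@pbs --fingerprint <server-fingerprint> --password <secret>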
 
  • SSDs or NVMe drives for high performance
  • SATA disks for larger capacity storage
SATA, SAS and NVMe are transport protocols, and SSD is the general storage type (different cell technologies subcategorize this further, e.g. SLC, MLC, QLC etc., plus features like PLP), so this is technically not 100% correct. NVMe drives are also SSDs, and SSDs exist for all of the mentioned transports: SATA, SAS and NVMe (ordered by throughput, ascending).
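If you want to see which transport a given drive actually uses, lsblk can show it; the column selection here is just an example:

Code:
# TRAN shows the transport (sata, sas, nvme); ROTA 0 = flash, 1 = spinning disk
lsblk -d -o NAME,TRAN,ROTA,SIZE,MODEL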

RAID controllers for data protection
Not for the Ceph / hyper-converged storage! Ceph wants direct access to the raw disks (HBA / IT mode), not a RAID controller in between.
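The OSDs consume the raw devices directly; on a Proxmox node that is e.g. (device name is an example):

Code:
# Create a Ceph OSD on a raw, unused disk -- no RAID volume underneath
pveceph osd create /dev/nvme0n1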

While Proxmox manages virtualization and storage, organizations can enhance security by integrating advanced cybersecurity solutions from Sangfor and Fortinet.
Ah ... this is another SPAM post ... caught and reported!
 
Ceph is the preferred software-defined storage for hyper-converged Proxmox clusters. It can deliver better performance and reliability than a traditional SAN if you choose your hardware and configuration carefully.

Common production setup:
  • 3+ nodes (odd number for quorum)
  • Each node with:
    • datacenter/enterprise NVMe SSDs (with power-loss protection) for the OSDs
    • redundant, fast NICs (25 Gbps or more) for the Ceph network
  • Switch recommendations:
    • Use low-latency non-blocking L2/L3 switches with enough backplane bandwidth for your NICs
    • Ceph loves jumbo frames: enable them (MTU 9000) on the Ceph networks (see the example after this list).
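As an example of the jumbo-frame point, a Ceph network interface on a Proxmox node could look roughly like this; the NIC names and addresses are made up:

Code:
# /etc/network/interfaces (excerpt)
auto bond0
iface bond0 inet static
        address 10.10.10.11/24            # Ceph public/cluster network
        bond-slaves enp65s0f0 enp65s0f1   # two ports for redundancy
        bond-mode active-backup           # or 802.3ad if your switches do LACP
        mtu 9000                          # jumbo frames, also set on the switch ports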
You should check the official Ceph hardware recommendations for more information.

I'd also highly recommend reading the Proxmox VE Ceph Benchmark 2023/12. It covers real-world tests and shows:
  • 10 Gbps networking becomes a bottleneck quickly, even with just a single fast SSD per node.
  • 25 Gbps offers headroom, but network topology and tuning (e.g. routed vs. RSTP) matter.
  • 100 Gbps networks are fast enough so that the bottleneck shifts to the Ceph client.
    • Single client: ~6000 MiB/s write, ~7000 MiB/s read
    • Three clients: ~9800 MiB/s write, ~19,500 MiB/s read
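If you want to get comparable numbers for your own hardware before committing, something like rados bench is a quick first check (the pool name is an example):

Code:
# 60 s of 4 MiB sequential writes into a test pool, keep the objects for the read test
rados bench -p bench-pool 60 write --no-cleanup
# Sequential reads of the objects written above, then clean up
rados bench -p bench-pool 60 seq
rados -p bench-pool cleanup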
 