Trying Proxmox VE for the first time. Need help with the setup.

kartheek.kp

New Member
Mar 18, 2025
Hi ALL,

I have a requirement where I need to install & configure a 3-node Proxmox cluster with HCI. As part of this setup, I need to configure Ceph storage and enable High Availability (HA).

3x server configuration: 2x CPU - Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz (18 cores each), with 10 x 32 GB RAM per processor.
2 x 1 TB HDD for RAID 1
14 x 1 TB HDD for RAID 6

How can I attain decent IOPS with this configuration, since we cannot afford to buy or add any SSDs to this setup?
Is it possible to build a 3-node cluster with the above-mentioned specifications?


I would appreciate your guidance. Looking forward to your insights.
 
Hi kartheek.kp,

Possible, yes, but without SSDs this setup will be slow, maybe too slow for your needs (IOPS).

With Ceph you don't use RAID: Ceph uses the drives directly and creates a "RAID array" over all hosts (simplified). The redundancy depends on how you configure Ceph, but with HDDs you won't have a good time; SSDs are highly recommended.
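
For example, on a replicated pool the redundancy is just a pool setting; a minimal sketch (the pool name "vm-pool" is a placeholder) that keeps 3 copies and stays writable as long as 2 are available:

    # 3 replicas per object, spread over the 3 hosts
    ceph osd pool set vm-pool size 3
    # keep serving I/O as long as 2 replicas are reachable
    ceph osd pool set vm-pool min_size 2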

The first thing to check is whether your servers' RAID adapter supports HBA mode or "passthrough", so that the OS can access the drives directly without interference from the RAID adapter.
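
A quick way to sanity-check that on each node (device names are just examples): the OS should see every individual disk, not one big virtual RAID volume:

    # each physical disk should show up as its own block device; ROTA=1 means spinning disk
    lsblk -o NAME,SIZE,TYPE,MODEL,ROTA
    # SMART data should be readable directly; behind a RAID volume this often fails
    smartctl -i /dev/sda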

Next is to check whether you have fast enough networking; multiple 25G and 10G ports would be recommended. Segmenting the traffic types is a good idea (see the sketch below the list):

1x Ceph internal (fast, redundant, 25G recommended, 10G minimum)
1x Ceph external (fast, redundant, 25G recommended, 10G minimum)
2x corosync (2 ports, no bridge, no bond, different subnets, 1G)
1x VM traffic (redundant, depends on your needs, 10G recommended)
1x Proxmox management
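
As a rough sketch of what that segmentation could look like in /etc/network/interfaces on one node (interface names, bonds and addresses are made-up placeholders, not your actual hardware):

    # Ceph cluster (internal) network, bonded
    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp129s0f0 enp129s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    # Ceph public (external) network, bonded
    auto bond1
    iface bond1 inet static
        address 10.10.20.11/24
        bond-slaves enp130s0f0 enp130s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    # two separate 1G ports for corosync, no bond, no bridge, different subnets
    auto eno3
    iface eno3 inet static
        address 10.10.30.11/24
    auto eno4
    iface eno4 inet static
        address 10.10.31.11/24

The two corosync subnets can then be used as redundant cluster links when creating the cluster, e.g. pvecm create mycluster --link0 10.10.30.11 --link1 10.10.31.11 (cluster name and addresses are placeholders).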

You should really read the documentation and the requirements for Ceph.

Maybe someone with more Ceph experience can give you more insights on this.
 
@kartheek.kp
If your requirement is HDD-only storage, you will struggle with IOPS...

However, if you can't afford SSDs for the main storage, I suggest considering whether you can afford 1 or 2 SSDs per host to support your HDD storage. Even smaller drives can help a lot.

I've found in my own test rigs that small (2 - 3 node) Ceph installations can benefit massively from having one SSD to use for the DB (or WAL, or both if possible) of HDD-backed RBD. If you're using CephFS, putting the MDS on an SSD and splitting up CephFS so that metadata is on SSD and data is on HDD can be a big boost.
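
On Proxmox that could look roughly like this (device names are placeholders; one SSD can hold the DB for several HDD OSDs, and the DB size can be tuned with --db_dev_size):

    # HDD as data device, SSD as RocksDB/WAL device for that OSD
    pveceph osd create /dev/sdc --db_dev /dev/sdb
    pveceph osd create /dev/sdd --db_dev /dev/sdb

    # if the SSDs also carry their own OSDs, a CRUSH rule can pin the CephFS
    # metadata pool to them ("cephfs_metadata" is a placeholder pool name)
    ceph osd crush rule create-replicated ssd-only default host ssd
    ceph osd pool set cephfs_metadata crush_rule ssd-only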

This won't compete with SSD storage, but it can make a large difference in IOPS for many workloads, and may make the performance tolerable for now.

Also, as @MarkusKo said, networking speed is incredibly important to Ceph, but I would caution you to perf-test for your own needs, as most of those numbers are based on SSD and NVMe drives. HDDs don't saturate the network as quickly, though you have a lot of them, so again, perf-test for your needs.
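
For a rough baseline once Ceph is up, something like this gives you cluster-level and in-guest numbers (pool name and test device are placeholders):

    # Ceph-level write/read benchmark against a test pool
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 rand
    rados -p testpool cleanup

    # 4k random-write IOPS from inside a test VM against a scratch disk
    fio --name=randwrite --rw=randwrite --bs=4k --iodepth=32 --runtime=60 \
        --time_based --direct=1 --ioengine=libaio --filename=/dev/vdb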
 