Suggestion about hardware

porkyHTTP

New Member
Aug 28, 2025
Hello to all,
I would like to ask how I can best reuse the hardware I have. It is currently used for ESXi and other bare-metal servers.
My goal is to repurpose all of it for Proxmox and build a cluster where I can migrate machines quickly in case of necessity.


PowerEdge R320
Intel(R) Xeon(R) CPU E5-2407 v2 @ 2.40GHz
24 GB RAM
2 x 500GB HDD

PowerEdge R210
Intel(R) Xeon(R) CPU X3470 @ 2.93GHz
8 GB RAM
2 x 500GB HDD

PowerEdge R210
Intel(R) Xeon(R) CPU X3470 @ 2.93GHz
8 GB RAM
2 x 500GB HDD

PowerEdge R330
Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz
32 GB RAM
2 x 1TB HDD

PowerEdge R6515
AMD EPYC 7302P 16-Core Processor
32 GB RAM
2 x 4TB HDD

PowerEdge R320
Intel(R) Xeon(R) CPU E5-2420 v2 @ 2.20GHz
32 GB RAM
2 x 1TB HDD

PowerEdge R320
Intel(R) Xeon(R) CPU E5-2420 v2 @ 2.20GHz
32 GB RAM
2 x 1TB HDD
 
What do you need exactly? That is some ancient hardware, but you can still run Proxmox on it, use Ceph and be on your way. You'll need more RAM to actually run a modern VM on some of those, and spinning disks aren't going to give you great performance, so dual gigabit LACP should be sufficient. The only outlier is the AMD system: you can't live-migrate between AMD and Intel, and software/Windows guests may not like switching either.

The primary issue you'll be running into is memory constraints, but as long as you stick to Linux guests and reasonable demands, this is feasible.
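
Something like this in /etc/network/interfaces on each node would set up the LACP bond with the VM bridge on top (interface names and addresses are just placeholders for your setup, and the switch ports must be configured for 802.3ad as well):

Code:
# LACP bond over both onboard NICs (example names eno1/eno2)
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

# bridge for VM traffic on top of the bond (example address)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0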
 
I am with @guruevi; for the AMD system I would recommend adding some more disks (HDDs plus two small but high-quality SSDs for a Special Device) and installing a PBS. Setting up backups from the beginning is often overlooked...
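
For example, the PBS datastore pool could be created with the HDDs plus the SSD mirror as special vdev, roughly like this (device names are only placeholders; use /dev/disk/by-id paths in practice):

Code:
# HDDs as raidz1, two SSDs as a mirrored special device for metadata
zpool create backup raidz1 /dev/sdb /dev/sdc /dev/sdd special mirror /dev/sde /dev/sdf
# optional: also send small data blocks to the SSDs
zfs set special_small_blocks=4K backup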
 
Yes, the hardware is not new, but it works. Currently I have VMs that mainly run LAMP stacks for websites, management systems or surveys, and only one machine runs Windows 7 32-bit for a specific piece of software.
 
I can dedicate one LAN interface of each server to VM access and the other one to processes like Ceph. But if I understand correctly, Ceph needs each machine in the cluster to have the same disk size, and this is not my case.
 
I have 1 Gbit interfaces. If the disks are different, how can Ceph store all VMs on each node? Will it be locked to the minimum disk size?
 
I have 1 Gbit interfaces.
For learning Ceph this is fine.

If the disks are different, how can Ceph store all VMs on each node? Will it be locked to the minimum disk size?
Ceph is a complex beast and the actual behavior depends on several details.

If there are enough nodes available... nothing happens if one node has a (single?) smaller OSD. If you activate only three nodes and use the default rules (size=3), the whole system gets degraded and needs manual intervention to get healthy again. In this specific situation the pool should stay writable and all VMs keep running.

Note that you should have more nodes than the minimum, and each and every node should have several OSDs. See my link...
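
To see what your pool is actually doing, these standard Ceph commands are a good start (replace <pool> with your pool name):

Code:
ceph -s                            # overall cluster state (HEALTH_OK / WARN / ERR)
ceph health detail                 # why the cluster is degraded, if it is
ceph osd pool get <pool> size      # number of replicas (default 3)
ceph osd pool get <pool> min_size  # replicas required for the pool to stay writable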
 
As said above, spinning disks will be fine over gigabit. I would suggest putting the links in LACP; that way you get 2G and there is no need to reserve a full link for guest traffic. Rather, use QoS with VLANs on the switch if you're running out of bandwidth (unlikely on machines of this era).

As far as minimum disk sizes go: your largest drives won't necessarily be used fully, but they can be of mixed size. In a larger cluster, Ceph will try to balance/weight them according to the total size of each disk, so larger disks will see more use. If n=3 and you have 4 nodes with 500G disks and 1 with 1T disks, the first block would be distributed to two of the 500G nodes and the 1T node, the second block to the other two 500G nodes and the 1T node. The 1T disk will be the "third" copy twice as often as any of the other nodes, meaning double the traffic and the likely point of contention, although in most cases the higher capacity also means newer hardware. So in the end the fill rate will be roughly equivalent across the nodes. If there are huge discrepancies, you could move things around and have OSDs of different sizes in each node.
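
You can watch how Ceph weights and fills the mixed-size OSDs with:

Code:
ceph osd df tree    # CRUSH weight, size and %use per OSD and per node
# if one OSD fills up much faster, you can reduce its share manually,
# e.g. (osd.3 and the weight value are just an example):
ceph osd crush reweight osd.3 0.45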
 
I received the R710, but I am unsure how to configure the disks: leave them as single units, or create a RAID volume? I think the Proxmox system itself takes only a few GB of space, so a whole 1TB disk is not necessary for it; the rest of the space would be lost and not usable for VMs.
 
I received the R710, but I am unsure how to configure the disks: leave them as single units, or create a RAID volume?
Rules of thumb: if you want to use Ceph or ZFS, then direct access to the drives is highly recommended - I would say: required. Check the controller's BIOS for an "HBA" mode; if it has none, check whether you can flash the controller to "IT" mode.

If neither is possible, then I would never use it in production - not with Ceph or ZFS.

Sidenote: flashing RAID controllers in such servers may have disadvantages. For example, iDRAC can no longer show the state of the physical drives...
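
A quick way to check whether the drives are really passed through: in HBA/IT mode every disk shows up as its own block device and SMART works directly (device names here are just examples):

Code:
lsblk -o NAME,MODEL,SERIAL,SIZE   # each physical disk should be listed individually
smartctl -a /dev/sda              # should return full SMART data

Behind a RAID controller you typically see one virtual disk instead, and smartctl only reaches the member drives via the -d megaraid,N workaround.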
 