Recommended hardware for modest upgrade of 3 PVE nodes

maxim.webster

Active Member
Nov 12, 2024
Germany
Dear all,

I am running PVE in what I consider a rather typical "IT nerd" setup: a 3-node cluster on consumer hardware, private use only.
  • 2 nodes are built identically: AMD 3200G (2C/4T) APU, 32GB DDR4 RAM, M.2 NVMe for the OS, 2x 4TB HDD for storage (connected to onboard SATA), GbE NIC
  • 1 node is rather new: Intel N100 (4C/4T) APU, 32GB DDR5 RAM, M.2 NVMe for the OS, 2x 4TB HDD for storage (connected via an M.2-to-SATA adapter), GbE NIC
  • all mainboards are Mini-ITX; the enclosure is a 1.5U 19" case from Intertec (1528L)
The cluster is running solid: CPU usage is not an issue and RAM is fairly allocated, but ZFS HA replication for one VM fails once or twice a month with "timeout" errors.

But I am especially missing two things:
  1. more cores to run Ceph as shared storage
  2. a second NIC (or more) to run Ceph and separate VM traffic from cluster traffic
I am looking for hardware recommendations that fit the housing scenario and provide an upgrade for a reasonable price - of course - , as I am planning to replace all 3 nodes.

Criteria:
  • 4C/8T (or better) APU, on-board preferred
  • 32GB RAM support or better (ECC/non-ECC is not a topic)
  • at least 1x NVMe M.2 slot and 2x SATA 6G sockets, or 2x NVMe M.2 slots (as I am running an M.2-to-SATA adapter in one node already)
  • at least 2x GbE or 2x 2.5GbE NICs
Constraints:
  • Mini ITX form factor
  • 1.5U Case, so no space for "Tower" CPU coolers (40mm height max)
  • support for Flex-ATX Power supply, so no external power "brick"

To give you an idea: the CWWK M11 looks quite suitable and matches the criteria above, but the "Engineering Sample" CPU is kind of weird. On the opposite side, the Minisforum BD775i is way overpowered on the CPU side but lacks a 2nd NIC. I could use a PCIe adapter card, though.

What are your thoughts?
 
For a Ceph homelab you only need 2.5G, 5G, or at most 10G networking. So plan around that; the cores and everything else aren't that important for a homelab.
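To put rough numbers on that, here is a back-of-the-envelope sketch. The 4 TB figure (one full HDD-sized OSD to re-replicate after a disk failure) and the ~80% usable-link-efficiency factor are my own assumptions, not anything Ceph-specific:

```python
def rebalance_hours(data_bytes, link_bits_per_s, efficiency=0.8):
    """Rough time to move data_bytes over a link, assuming ~80% usable throughput."""
    usable_bytes_per_s = link_bits_per_s / 8 * efficiency
    return data_bytes / usable_bytes_per_s / 3600

osd = 4e12  # one full 4 TB HDD OSD, worst case
for label, speed in [("1 GbE", 1e9), ("2.5 GbE", 2.5e9), ("10 GbE", 10e9)]:
    print(f"{label}: {rebalance_hours(osd, speed):.1f} h")
# 1 GbE: 11.1 h
# 2.5 GbE: 4.4 h
# 10 GbE: 1.1 h
```

In other words, 2.5G already brings a worst-case rebalance into the "overnight" range, which is why faster links matter more than extra cores here.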
 
Hello from Germany,
you can pick these parts on the Geizhals.de page:

# https://geizhals.de/wishlists/4970670

My recommendations for an AMD Ryzen 5700G system:
  • Mainboard: ASRock B550M Pro4
  • CPU: AMD Ryzen 7 5700G (APU), PCIe 3.0, no ECC
  • CPU cooler: be quiet! Pure Rock Slim 2 or Pure Rock Slim 3
  • RAM: G.Skill 32 GB DDR4
  • 2x 10G NIC: StarTech LAN adapter, 2x RJ-45, PCIe 3.0 x4
  • 4x PCIe 3.0 x4 NVMe: Delock 90210 - PCIe x16 > 4x M.2 NVMe, bifurcation
  • Case: Chieftec Mesh CT-01B, black (Alternate.de)
  • Fans: 2x 120 mm PWM
  • PSU: be quiet! System Power 11 450W
  • some more parts
I know you will reuse the older parts, but what about the heat output and the space needed for the drives?

And the ASRock B550M Pro4 supports 4x PCIe 3.0 x4 bifurcation on the main PCIe 3.0 x16 slot, plus PCIe 3.0 x1 and PCIe 3.0 x4 slots for the 10G NIC!
On my systems I use the SilverStone ECL01 2.5G PCIe card.
 

Thanks, but -

the board is Micro-ATX and the CPU cooler will not fit inside the case. The case, however, is mandatory, as the 19" rack is quite short.

I could do a CPU upgrade on the two AMD boards, as the mainboard and the (low-profile) cooler support the Ryzen 5 5600G. But I would still have the single-NIC challenge.
 
You could reuse one or two of your old NUCs as a combined Proxmox Backup Server/qdevice. If you can live without Ceph, using ZFS storage replication and some USB NICs as a dedicated corosync network would also be a possibility.

Or separate the cluster into multiple single nodes and use a combination of pve-zsync and the Datacenter Manager.
 
Personally, if you can tolerate a non-embedded CPU, I would go with the Gigabyte B550I Aorus Pro AX Mini-ITX motherboard. It has 2 NVMe slots and an x16 PCIe slot that supports bifurcation into one x8 device and two x4 devices. I own this board: I use a PCIe bifurcation adapter (https://www.amazon.com/dp/B0BMWTRY9T) and run a 10GbE network card plus two x4 NVMe drives, bringing the total NVMe count to four.

That board also supports either ECC or non-ECC memory with the right CPU. I run a Ryzen 5 Pro 5650GE in mine: 35 W TDP, easy to keep cool, 6 cores/12 threads, plenty of horsepower. If you don't want ECC memory, you can go with a Ryzen 5 5600G. The Gigabyte board also has 4 SATA ports and 2.5GbE networking. By the way, this is the same board the 45Drives people chose for their HL4/HL8 servers. It's good stuff.

The only downside for you is the single onboard NIC. If you want another, you will have to add one somehow: USB NIC, PCIe NIC, or even an M.2-to-NIC adapter that could give you 2.5GbE, 5GbE, or even 10GbE networking. Something like this: https://www.newegg.com/p/1DK-013R-00BY0
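For reference, a quick sketch of the bandwidth behind that x8/x4/x4 split, using the standard PCIe 3.0 figures (8 GT/s per lane, 128b/130b encoding); the takeaway is that an x4 link has plenty of headroom for either an NVMe drive or a 10GbE NIC:

```python
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s per lane per direction
PCIE3_GBPS_PER_LANE = 8 * (128 / 130) / 8

def link_gbps(lanes):
    """Theoretical one-direction bandwidth of a PCIe 3.0 link in GB/s."""
    return lanes * PCIE3_GBPS_PER_LANE

print(f"x4 slot:  {link_gbps(4):.2f} GB/s")  # 3.94 GB/s
print(f"x8 slot:  {link_gbps(8):.2f} GB/s")  # 7.88 GB/s
print(f"10GbE NIC needs ~{10 / 8:.2f} GB/s")  # 1.25 GB/s, far below an x4 link
```

So with this layout none of the bifurcated devices is link-starved, which is the point of preferring bifurcation-capable boards.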
 
You could reuse one or two of your old NICs as a combined Proxmox Backup Server/qdevice. If you can live without Ceph, using ZFS storage replication and some USB NICs as a dedicated corosync network would also be a possibility.

Or separate the cluster into multiple single nodes and use a combination of pve-zsync and the Datacenter Manager.
Sorry, NIC not NUC. Every node has only 1 network interface card. And I do actually use the HA feature, so switching to single nodes in the Datacenter Manager is AFAIK not an option.

Using USB network adapters is a good point; even the older AMD boards should have USB 3.
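As a sketch of that idea: a USB NIC shows up as a regular network interface (the name varies, e.g. `enx...` based on its MAC), so it could get a static address on its own subnet and then be added as a second corosync link. Interface name and addresses below are placeholders, not from any real setup:

```shell
# /etc/network/interfaces fragment (sketch; interface name and subnet are assumptions)
auto enx001122334455
iface enx001122334455 inet static
    address 10.99.0.1/24

# Then add each node's 10.99.0.x address as an extra link in /etc/pve/corosync.conf
# (ring1_addr under the node entries) and bump config_version.
```

Since corosync's kronosnet transport supports multiple links with priorities, the flaky USB link would only need to be good enough as a dedicated or fallback cluster path.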
 
Thanks, I appreciate your support. The board looks promising, although I'd pay for features I won't use (WiFi). I could even continue to use my DDR4 memory.

Nobody has commented on the CWWK or other "NAS" boards so far …
 
Actually, the Gigabyte board or any "more modern" AMD board with 2 NVMe and (at least) 2 SATA ports seems valid.

I could keep NVMe and HDD of all existing nodes, as well as RAM and CPU cooler of two nodes.

I would have to buy 3x AMD 5600G, another 32GB of DDR4 RAM, and 3x M.2-to-network adapters.

Using PCIe network adapters could also be an option, but - due to the limited height of the case - this would require riser cards or cables.
 
Put in the bifurcation adapter and use only the two x4 slots. That should fit in your case and would give you three NVMe drives plus a free slot for a NIC.

I am not a fan of the Minisforum board, as it seems overpriced to me. And the CWWK board might be OK, but if that CPU cooler and fan are too loud, I am not sure you have an upgrade path.

Yes, with the Gigabyte board there are features you may not use, but I have to believe there are enough of these boards on the used market to eliminate that concern (i.e., by not paying full price). If you are going mITX, DEFINITELY give preference to boards that support bifurcation. It is super useful.
 