iadityaharsh:

I'm facing a dilemma with my server configuration: I have two servers running at home.

Problem Statement:
I am considering setting up a 3-node Proxmox HA cluster with Ceph, with all 3 nodes having the same specs as Server 1 (given below). If I were to do this, what kind of performance gains am I looking at, or performance drops, if any? (See the quick resource tally at the end of this post.) In any case, my TrueNAS server will remain intact and act as my primary backup destination.


My Workloads:
1. 1x Windows Server 2022 VM (6 cores, 16GB RAM)
2. 6x Windows 10 Pro VM (each 4 cores, 4GB RAM)
3. 4x Windows 11 Pro VM (each 4 cores, 8GB RAM)
4. 4x Ubuntu Server 22.04.4 LTS (each 2 cores, 4GB RAM) (all running Docker containers)

Server 1 (Proxmox VE Node) inside Toploong TP2U430-06:
AMD Epyc 75F3 (32 Cores @2.95GHz)
Supermicro H12SSL-NT (on-board 10GbE NIC)
Micron 128GB DDR4 3200MT/s RDIMM
2x Gigabyte Aorus 500GB M.2 (For Proxmox OS Drives in ZFS Mirror)
2x Micron 7450Pro 1920GB (For VM Storage in ZFS Mirror)
PCIe Devices:
1. NVIDIA Mellanox MCX4121A-ACAT
2. Intel X710-DA2


Server 2 (TrueNAS Scale) inside Chenbro RM41300-G:
AMD Epyc 7542 (32 Cores @2.9GHz)
Supermicro H12SSL-I (on-board 1GbE NIC)
Micron 128GB DDR4 3200MT/s RDIMM
2x Silicon Power 256GB M.2 Gen-3 (For TrueNAS OS Drives in ZFS Mirror)
8x Samsung 980Pro 2TB M.2 Gen4 (4x ZFS Mirror)
2x Silicon Power 1TB M.2 Gen-3 (ZFS Mirror)
6x Seagate Exos X20 14TB HDD (ZFS RAID-Z2)
PCIe Devices:
1. 2x Asus Hyper x16 Gen-4 NVMe Carrier Card
2. 1x Asus Hyper x16 Gen-3 NVMe Carrier Card
3. NVIDIA Mellanox MCX4121A-ACAT


P.S.: I'm planning on getting 2x Intel Flex 140 for the Proxmox node, and might order 3x if I decide to set up the HA cluster.
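For context on sizing, here is a quick back-of-the-envelope tally of the allocated resources against the proposed cluster, using only the workload and node numbers above (the failover check assumes one of the three nodes is down during an HA event):

```python
# Back-of-the-envelope tally: allocated vCPUs/RAM vs. the proposed cluster.
# Workload numbers are from the list above; node specs match Server 1.
workloads = [
    # (count, vCPUs each, RAM GB each)
    (1, 6, 16),  # Windows Server 2022
    (6, 4, 4),   # Windows 10 Pro
    (4, 4, 8),   # Windows 11 Pro
    (4, 2, 4),   # Ubuntu Server 22.04
]
total_vcpu = sum(n * c for n, c, _ in workloads)  # 54 vCPUs
total_ram = sum(n * r for n, _, r in workloads)   # 88 GB

cores_per_node, ram_per_node = 32, 128
print(f"Allocated: {total_vcpu} vCPUs, {total_ram} GB RAM")
# HA check: the whole load must still fit if one of the 3 nodes fails.
print(f"Fits on 2 surviving nodes? "
      f"vCPU: {total_vcpu <= 2 * cores_per_node}, "
      f"RAM: {total_ram <= 2 * ram_per_node}")
```

At 54 vCPUs and 88 GB RAM allocated, the load fits on two surviving nodes, though Ceph OSDs and monitors will want a few extra GB of RAM per node on top of the VM allocations.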
 
Quote:
2x Gigabyte Aorus 500GB M.2 (For Proxmox OS Drives in ZFS Mirror)

Don't use consumer M.2 SSDs/NVMe; you are going to burn them out. (And with Ceph, you'll also have the Ceph monitor writing to them.)

You can use the Kingston DC1000B M.2 SSD, for example.


With Ceph, you'll have a little bit more latency than with local disks. I'm at around 0.125 ms for reads and 1 ms for writes.
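If you want comparable numbers on your own hardware, here is a minimal sketch that times synchronous 4 KiB I/O against a test file; the path is a placeholder, so point it at the storage you want to compare (a Ceph-backed mount vs. a local ZFS dataset):

```python
# Rough single-threaded 4 KiB latency probe. Point TEST_FILE at the storage
# you want to measure (e.g. a Ceph-backed mount vs. a local ZFS dataset).
import os
import time

TEST_FILE = "/mnt/test/latency.bin"  # placeholder path -- adjust
BLOCK, ITERS = 4096, 1000
buf = os.urandom(BLOCK)

fd = os.open(TEST_FILE, os.O_RDWR | os.O_CREAT | os.O_DSYNC)
try:
    t0 = time.perf_counter()
    for _ in range(ITERS):
        os.pwrite(fd, buf, 0)  # O_DSYNC: each write waits for stable storage
    write_ms = (time.perf_counter() - t0) / ITERS * 1000

    t0 = time.perf_counter()
    for _ in range(ITERS):
        os.pread(fd, BLOCK, 0)  # beware: reads may be served from cache
    read_ms = (time.perf_counter() - t0) / ITERS * 1000
finally:
    os.close(fd)

print(f"avg write: {write_ms:.3f} ms, avg read: {read_ms:.3f} ms")
```

A proper benchmark (fio, or rados bench against the pool) is more representative under concurrency, but this gives a quick first-order read/write latency comparison.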
 
Quote:
Don't use consumer M.2 SSDs/NVMe; you are going to burn them out. (And with Ceph, you'll also have the Ceph monitor writing to them.)

Yes, that might happen on the new upgrade, and that's why I'm using ZFS: so the drives are easy to replace. I've been running this server since January 2023, and Proxmox shows 19% wearout.
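For anyone who wants to read that same number outside the Proxmox GUI, here is a small sketch that pulls the NVMe "Percentage Used" field from smartctl's JSON output; the device path is a placeholder, and the key names are what recent smartmontools (7.0+) emits, so verify them against your own output:

```python
# Read NVMe wear ("Percentage Used") via smartctl's JSON output.
# Requires smartmontools >= 7.0 and root; /dev/nvme0 is a placeholder.
import json
import subprocess

out = subprocess.run(
    ["smartctl", "-a", "--json", "/dev/nvme0"],
    capture_output=True, text=True, check=True,
).stdout

data = json.loads(out)
used = data["nvme_smart_health_information_log"]["percentage_used"]
print(f"Percentage Used: {used}%  (what Proxmox reports as 'Wearout')")
```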
 
