Hey all,
I have to size a PVE/Ceph environment for two data centers.
We need a new home for roughly 300 small VMs (4 cores, 4 GB memory, 100-200 GB storage).
I estimate half a year until all 300 VMs are migrated, and I have planned for 100% growth over the next three years.
Storage bandwidth should not be less than that of a single local spinner for each VM.
One more thing: I have to rely on HP hardware.
Based on these requirements, I sized the following three types of servers:
PVE
------
DL360Gen10
2x Xeon 6130 (16core)
512 GB Memory
2x 240 GB SATA M.2 mixed r/w
2x 10GE Ethernet
OSD
-------
DL380Gen10
1x Xeon 6130 (16core)
64 GB Memory
2x 240 GB SATA M.2 mixed r/w
24x 2.4 TB SAS SFF HDDs
2x NVMe SSD
2x 10GE Ethernet
BBU
MON/MDS
---------------
DL20Gen10
1x Xeon E2124
16GB Memory
2x 240 GB SATA M.2 mixed r/w
For each DC, I would take:
4x PVE, 5x OSD, 3x MON/MDS
Each DC gets its own independent cluster.
One ceph pool to store the VM disks
One CephFS to store backups from the other DC.
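As a quick sanity check on the Ceph capacity per DC (all of the following are my assumptions, not hard numbers: 3x replication, an average of 150 GB per VM disk, and roughly 300 VMs per DC after the planned 100% growth):

```python
# Rough per-DC Ceph capacity check. Replication factor, average disk size,
# and the per-DC VM count after growth are assumptions, not given figures.
TB = 1.0  # work in terabytes

osd_nodes = 5
hdds_per_node = 24
hdd_size = 2.4 * TB
replication = 3                 # assumed replicated pool size

raw = osd_nodes * hdds_per_node * hdd_size   # total raw HDD capacity
usable = raw / replication                    # usable after replication

vms_per_dc = 300                # assumed half of 600 VMs after 100% growth
avg_disk = 0.15 * TB            # assumed midpoint of the 100-200 GB range
vm_demand = vms_per_dc * avg_disk

print(f"raw: {raw:.0f} TB, usable: {usable:.0f} TB, VM demand: {vm_demand:.0f} TB")
```

Note this leaves the usable headroom shared between VM disks and the other DC's backups on CephFS.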
In the worst case scenario of a complete DC outage, I have to manually (or via API script) restore the missing VMs.
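A minimal sketch of that restore-by-script idea, assuming the backups land as standard vzdump archives on the surviving DC's CephFS mount (the paths and the storage name below are made up for illustration):

```python
# Sketch: rebuild lost VMs from vzdump archives via qmrestore commands.
# BACKUP_DIR and TARGET_STORAGE are assumed placeholders for a real setup.
import re
from pathlib import Path

BACKUP_DIR = Path("/mnt/pve/cephfs-backup/dump")   # assumed CephFS mount point
TARGET_STORAGE = "ceph-vm"                          # assumed RBD storage name

def restore_commands(backup_dir: Path) -> list[list[str]]:
    """Build one qmrestore command per VMID, using its newest archive."""
    newest: dict[int, Path] = {}
    # vzdump names archives vzdump-qemu-<vmid>-<YYYY_MM_DD-HH_MM_SS>...,
    # so lexicographic order is chronological and the last match wins.
    for archive in sorted(backup_dir.glob("vzdump-qemu-*.vma.zst")):
        m = re.match(r"vzdump-qemu-(\d+)-", archive.name)
        if m:
            newest[int(m.group(1))] = archive
    return [
        ["qmrestore", str(path), str(vmid), "--storage", TARGET_STORAGE]
        for vmid, path in sorted(newest.items())
    ]

if __name__ == "__main__":
    if BACKUP_DIR.is_dir():        # only meaningful on a node with the mount
        for cmd in restore_commands(BACKUP_DIR):
            print(" ".join(cmd))   # review, then run via subprocess or a shell
```

On a real PVE node the printed commands could be executed directly (or wrapped with `subprocess.run`); printing first allows a dry run.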
My open topics so far are:
1. Will Proxmox support the hardware (the DL20 includes an S100i controller)? I hope it will work in HBA mode.
2. Which caching strategy shall I use?
3. Since both clusters are independent, how do I avoid duplicate VM IDs in case of a restore?
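On point 3, one possible convention (just an illustration, not the only option) is to give each cluster a disjoint VMID range, so a VM restored into the other DC can keep its original ID without ever colliding:

```python
# Illustration: disjoint VMID ranges per DC so IDs never collide on restore.
# The range boundaries are arbitrary example values.
RANGES = {
    "dc1": range(1000, 5000),
    "dc2": range(5000, 9000),
}

def home_dc(vmid: int) -> str:
    """Return which DC a VMID belongs to under the range convention."""
    for dc, r in RANGES.items():
        if vmid in r:
            return dc
    raise ValueError(f"VMID {vmid} is outside all assigned ranges")

def next_free(dc: str, used: set[int]) -> int:
    """Pick the lowest unused VMID in the DC's range."""
    for vmid in RANGES[dc]:
        if vmid not in used:
            return vmid
    raise RuntimeError(f"no free VMIDs left in {dc}")
```

With this, a backup archive's VMID immediately tells you which cluster it came from.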
Any comments, ideas, recommendations, or concerns about this approach or my open questions are highly appreciated.
Cheers,
Martin