Hi everybody,
we've just decided to build a new virtualization environment based on three PVE+Ceph nodes.
At the moment we're running about 50 VMs (Windows and Linux servers) with 192 vCPU cores, 377 GB RAM and 12 TB of storage assigned, of which 8.3 TB is actually in use.
We'd like to set up an as-standard-as-possible installation of PVE and Ceph. The first two steps I have in mind are 1) planning the server hardware and 2) planning the networking around these servers.
1) Server Hardware
Because we are very happy with Thomas Krenn for years, we would like to buy these servers there. Here's my first shot (per machine):
** 2HE AMD Dual-CPU RA2224 Server **
CPUs: 2x AMD EPYC 7351
RAM: 256 GB (4x 64 GB) ECC Reg DDR4 2666
SSDs (OS and software): 2x 240 GB Samsung SM883
SSDs (Ceph): 7x 1.92 TB Samsung SM883 (each one serving as an OSD with colocated journal for Ceph storage)
1x Broadcom HBA 9300-8i
2x Intel 10 Gigabit X710-DA2 SFP+
Proxmox Standard subscription
5 years of essential hardware support
Notes:
a) Why AMD? Because when Goliath fights against David, we're on David's side. Are there great reasons to use Intel anyway?
b) SSDs: What about SM883 vs. PM883? Or something completely different?
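To sanity-check the sizing above, here's the back-of-the-envelope math I did (assuming Ceph's default 3x replication, roughly 15% free headroom for rebalancing, and that the remaining two nodes must carry all VMs if one node fails):

```python
# Quick sanity checks for the proposed 3-node sizing.
# Assumptions: Ceph default 3x replication, ~15% free headroom
# for rebalancing, N-1 failover for RAM.
nodes = 3
ram_per_node_gb = 256
vm_ram_gb = 377          # currently assigned to VMs

osds_per_node = 7
osd_size_tb = 1.92
replication = 3
headroom = 0.15

# RAM: can the remaining two nodes carry all VMs if one node fails?
ram_after_failure = (nodes - 1) * ram_per_node_gb
print(f"RAM with one node down: {ram_after_failure} GB for {vm_ram_gb} GB of VMs")

# Storage: raw vs. usable after replication and headroom
raw_tb = nodes * osds_per_node * osd_size_tb
usable_tb = raw_tb / replication * (1 - headroom)
print(f"Ceph raw: {raw_tb:.1f} TB, usable (approx): {usable_tb:.1f} TB")
```

That gives us roughly 11.4 TB usable against the 8.3 TB currently in use, and 512 GB RAM for 377 GB of VMs after a node failure, so there's some but not a lot of growth room in either dimension (and the OSD daemons themselves will want a few GB of RAM each on top).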
2) Networking
The three nodes will live in three different rooms, which are connected via OM3 fibre runs (R1 <-107 meters-> R2 <-145 meters-> R3).
Each room has a switch virtual chassis based on Juniper EX3300 switches (the rooms are also connected to each other at 10 GBit/s over the fibre runs mentioned above). Everywhere there are at least four free 10 GBit/s ports.
So in my opinion the "obvious" way is:
1x 10 GBit/s for the "VM network"
1-2x 10 GBit/s for Ceph
1x 10 GBit/s for live migration
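With Ceph on its own links, my understanding is that we'd split the public and cluster traffic in ceph.conf, roughly like this (the subnets are made-up placeholders, not our actual addressing):

```ini
[global]
# public network: client/VM traffic to monitors and OSDs
public_network = 10.10.10.0/24
# cluster network: OSD replication and heartbeat traffic on dedicated link(s)
cluster_network = 10.10.20.0/24
```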
Another wild idea would be to build a direct full mesh between the three nodes for Ceph at more than 10 GBit/s. The question here is: what is possible over 145 meters of OM3? 25 GBit/s? 40 GBit/s? Even more?
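As I understand it, the routed variant of such a switchless mesh is done with per-peer /32 routes in /etc/network/interfaces; a rough sketch for one node (interface names and addresses are placeholders):

```text
# Node 1 of a 3-node routed full mesh (no switch in between).
# ens1f0 is the direct link to node 2, ens1f1 the direct link to node 3.
auto ens1f0
iface ens1f0 inet static
    address 10.15.15.50/24
    up   ip route add 10.15.15.51/32 dev ens1f0
    down ip route del 10.15.15.51/32

auto ens1f1
iface ens1f1 inet static
    address 10.15.15.50/24
    up   ip route add 10.15.15.52/32 dev ens1f1
    down ip route del 10.15.15.52/32
```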
That's it for the moment. I'd be very glad to hear your comments on this project, and I promise to run any performance test you'd like to see in this environment once it's built!
Thanks and many greetings from Germany
Stephan