Enterprise Hypervisors Architecture

Shazim

New Member
May 17, 2024
I recently started working as a sysadmin at a new company. The infrastructure is a mess: they are running a libvirt/KVM hypervisor setup without any management interface, and each VM's storage is local to the machine that hosts it. Everything is outdated, there is no high availability, and I am struggling to manage it.

I would like to modernize everything. I have 3 server rooms that are connected to each other with 40Gb/s fiber. I have managed to get hold of 6 servers, 2 sets of 3:

  • 3 HPE ProLiant DL380 Gen10 (16c/32t, RAM 128GB, HDD 44TB)
  • 3 IBM SR650 (32c/64t, RAM 512GB, HDD 6.6TB)
I would like to put together, at low cost, a modern solution that can serve as a model for what we could have in the future. In other words, I would like to show my colleagues what I am aiming for. For now, I am not talking about compute power, storage capacity, or anything else; the goal is simply to get it working.

I would like to have high availability across my 3 server rooms. I was thinking of using my 3 IBM servers as compute hypervisors and my 3 HPE servers as SAN storage.

I am considering the Proxmox VE solution.

However, I am not an architect, and I am having trouble framing the project. Does my solution make sense? How do I do SAN? Is my 40Gb/s connection between the rooms sufficient?

If anyone could shed some light, it would be much appreciated.

Thanks.
 
Hi @Shazim, you seem to have a solid initial plan and a good DC infrastructure for a proof of concept. 40Gbit is more than sufficient for stable cluster operations (a PVE cluster only needs a low-latency network for a small amount of cluster traffic).
40Gbit should also be OK for the "SAN". I suspect you mean Ceph?
The storage in the servers may become the bottleneck; HDDs should be avoided at all costs. You can likely replace them with NVMe drives for a relatively small investment.
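To give you an idea of how little is involved on the compute side, forming the 3-node cluster over a dedicated corosync link boils down to a couple of pvecm commands. Here is a rough sketch, wrapped in a small Python script so it is repeatable; the cluster name and the 10.10.10.x addresses are placeholders for whatever your inter-room link uses:

```python
#!/usr/bin/env python3
"""Sketch: bootstrap a 3-node Proxmox VE cluster (names and IPs are placeholders)."""
import subprocess

def run(args):
    """Echo and run a command on the local node, failing loudly on error."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# On the first IBM node: create the cluster and pin corosync (link0) to the
# dedicated low-latency network between the server rooms.
run(["pvecm", "create", "poc-cluster", "--link0", "10.10.10.11"])

# On each of the other two IBM nodes (run there, not here), join the cluster:
#   pvecm add 10.10.10.11 --link0 10.10.10.12
#   pvecm add 10.10.10.11 --link0 10.10.10.13

# Once all three nodes have joined, verify membership and quorum.
run(["pvecm", "status"])
```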

How do I do SAN?
If your goal is to get by with only existing assets, then the logical choice here is Ceph. You could use the PVE built-in Ceph, which is very easy to implement. However, that requires all nodes to be part of the PVE cluster, which would leave the cluster with an even number of nodes, and that is not optimal; you would be well advised to add a QDevice in your PoC. It would also mean heterogeneous hardware in the cluster, which is not ideal.
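To make the QDevice and built-in Ceph points concrete, this is roughly what both boil down to. Treat it as a sketch only; every IP, hostname and device path below is a placeholder:

```python
#!/usr/bin/env python3
"""Sketch for the FIRST storage node: QDevice vote + built-in Ceph bootstrap.
Every IP, hostname and device path is a placeholder."""
import subprocess

def run(args):
    """Echo and run a command on this node, failing loudly on error."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# --- QDevice: external tie-breaker vote for an even-sized cluster ------------
# Prerequisite, on a small machine OUTSIDE the cluster:  apt install corosync-qnetd
run(["apt", "install", "-y", "corosync-qdevice"])   # repeat on every PVE node
run(["pvecm", "qdevice", "setup", "10.10.10.50"])   # run once; 10.10.10.50 = qnetd host

# --- Built-in Ceph ------------------------------------------------------------
run(["pveceph", "install"])                              # Ceph packages (every node)
run(["pveceph", "init", "--network", "10.10.10.0/24"])   # once for the whole cluster
run(["pveceph", "mon", "create"])                        # also create mons on 2 more nodes
run(["pveceph", "mgr", "create"])
run(["pveceph", "osd", "create", "/dev/nvme0n1"])        # one OSD per data disk, per node
```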

The other option is to install and run Ceph on your own, outside of PVE, and attach it to the cluster as external storage.
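If you go that route (for example Ceph on the three HPE boxes), PVE consumes it like any other RBD storage. A rough sketch of registering such a pool through the PVE API with the proxmoxer Python library; every address, name and credential below is a placeholder, and the client keyring still has to be copied to /etc/pve/priv/ceph/<storage-id>.keyring on the PVE side:

```python
#!/usr/bin/env python3
"""Sketch: attach an external Ceph cluster to PVE as RBD storage (placeholders throughout)."""
from proxmoxer import ProxmoxAPI   # pip install proxmoxer requests

# Any cluster node will do; storage definitions are cluster-wide.
pve = ProxmoxAPI("10.10.10.11", user="root@pam", password="secret", verify_ssl=False)

# POST /storage: register the external pool as RBD storage.
pve.storage.post(
    storage="hpe-ceph",                               # storage ID shown in PVE
    type="rbd",
    monhost="10.10.20.21 10.10.20.22 10.10.20.23",    # the external Ceph monitors
    pool="vm-pool",
    username="admin",
    content="images,rootdir",
)

# Quick check that the storage is now known to the cluster.
print([s["storage"] for s in pve.storage.get()])
```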

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Seeing that you have non-identical hardware, using the HPEs as a SAN should work. You may want to use ESOS [Enterprise Storage OS] (esos-project.com) for the SAN software.

Then set up the IBMs as a PVE cluster.

If the hardware were identical, I would recommend Ceph. You really, really want identical hardware for Ceph; otherwise you will spend a lot of time troubleshooting.
 
Thank you for your answers, and sorry for the late reply. OK, so I will install PVE on my 3 IBM servers and set them up as a cluster, and I will test ESOS on my three HPE servers.

I used the DataCore solution at my previous company; is ESOS a similar kind of SAN solution?

How do I integrate my ESOS SAN storage with PVE?
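From what I have read so far, the usual pattern seems to be: export a LUN from ESOS over iSCSI, register it in PVE as an iSCSI storage, then create a volume group on the LUN and add it as shared LVM so every node can use it. Something like the sketch below, if I understand correctly; the portal, IQN and names are just placeholders I made up:

```python
#!/usr/bin/env python3
"""Sketch: consume an ESOS iSCSI LUN from PVE as shared LVM (placeholders throughout)."""
from proxmoxer import ProxmoxAPI   # pip install proxmoxer requests

pve = ProxmoxAPI("10.10.10.11", user="root@pam", password="secret", verify_ssl=False)

# 1) Register the iSCSI target exported by ESOS. "content none" means the LUNs
#    are not used directly, only as a base for LVM.
pve.storage.post(
    storage="esos-san",
    type="iscsi",
    portal="10.10.20.10",                       # ESOS target portal
    target="iqn.2024-05.local.esos:target0",    # IQN of the exported target
    content="none",
)

# 2) After creating a volume group on the LUN (pvcreate/vgcreate, once, on one
#    node), register it as shared LVM so all nodes can place VM disks on it.
pve.storage.post(
    storage="san-lvm",
    type="lvm",
    vgname="vg_san",
    shared=1,
    content="images,rootdir",
)
```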

Thank you for your answers.
 
