Hyper-converged setup

klowet

Jun 22, 2018
Hi

I'm going to build a new environment to host virtual machines. Most of them are web servers, plus some database and mail servers. On most VMs, cPanel or another hosting control panel will be installed for hosting websites. I'm looking into Proxmox and what it offers compared to alternatives like VMware (price aside). I've already read a lot about Proxmox and Ceph, but I still have some questions.

Setup
I would like to create a hyper-converged environment this time, with no separate storage cluster anymore. What do you think of this small network scheme? It's simplified; it doesn't show the management and backup networks etc., just the PVE part.

The cluster consists of three nodes. Each node is connected to a 10G switch and to an uplink switch towards the firewalls. The network is redundant.

Each node would have 4 SSDs for OSDs (3x replication across the nodes) and one disk for the PVE OS.

[Image: Nexxwave Proxmox network - Network Diagram.png]

Questions
  1. Is this a good starting point, or would you do it differently?
  2. Should I use only SSDs? Or should I use HDDs for the OSDs and an SSD for the OSD journal? From what I understand, an OSD journal is preferred for better write speeds, but I need high read speed (serving websites).
  3. If I go full SSD, should I still put the journals on separate SSDs? On just one SSD, or on two in RAID 1?
  4. How big should the PVE OS disk be? In RAID 1 or not?
  5. Doubts? Tips?
Thanks
 
Hi,

Your questions:

1. Yes, it sounds like a really good starting point.
2. If you can afford the SSDs budget-wise, that's fine! But keep an eye on the SSD type (it should be enterprise grade, and most important is latency).
3. On a full-SSD setup a separate journal does not make sense, and always use just one SSD, no RAID setup; the redundancy comes from Ceph. Setting up multiple OSDs on one SSD can be a win: http://tracker.ceph.com/projects/ceph/wiki/Tuning_for_All_Flash_Deployments
4. The OS disk can be small; 128 GB should be enough.
5. Tips:
- Definitely use an HBA, not a RAID controller.
- Think about a second drive so you can set up the OS disk as a ZFS mirror.
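As a rough sanity check on the sizing above (the per-SSD size, fill target, and network speed below are hypothetical illustration numbers, not from this thread), 3 nodes with 4 OSD SSDs each under 3x replication works out to:

```python
# Back-of-the-envelope Ceph capacity estimate (hypothetical drive sizes).
NODES = 3
SSDS_PER_NODE = 4
SSD_TB = 1.92          # e.g. a 1.92 TB enterprise SSD (assumed, not specified above)
REPLICA = 3            # 3x replication as in the proposed setup
FILL_TARGET = 0.80     # keep headroom so a failed OSD can rebalance

raw_tb = NODES * SSDS_PER_NODE * SSD_TB
usable_tb = raw_tb / REPLICA * FILL_TARGET

print(f"raw: {raw_tb:.2f} TB, usable at 80% fill: {usable_tb:.2f} TB")
```

With 3x replication only about a quarter of the raw capacity is usable once you leave rebalancing headroom, which is worth knowing before buying drives.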
 
Thanks, Klaus.
How about the read speed of Ceph? Most data will be read, not written. The journal SSD is only used for writing, right? Can you compare the read speed with e.g. a RAID 5, or does that depend on the total number of OSDs?

Is the journal SSD also a member of the Ceph cluster? Isn't it "just one" disk per node? I thought: if you don't add any local redundancy for that disk, the data that isn't yet written to the OSDs is lost. Or maybe ZFS? Isn't that correct?

I would use the Samsung PM863a Enterprise SSD. I'll look for some more info and reviews about that type.
 
The journal is per OSD, though you can use one SSD with partitions for many OSDs, and no, you don't need extra redundancy on the journal device.

In the ideal case, the read speed is the read speed of the OSD devices on which the corresponding PGs sit. More OSDs, more available bandwidth.

The PM863 sounds good.
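To illustrate the "more OSDs, more bandwidth" point above (the per-device read speed is a hypothetical number, not a benchmark): with enough parallel clients, aggregate reads approach the sum of the individual device speeds, until the network becomes the limit:

```python
# Idealized aggregate read bandwidth for the proposed 3-node cluster.
N_OSDS = 12                  # 3 nodes x 4 SSDs
SSD_READ_MBPS = 500          # hypothetical per-SSD sequential read speed
NODES = 3
NET_GBPS = 10                # one 10G link per node

disk_limit_mbps = N_OSDS * SSD_READ_MBPS
net_limit_mbps = NODES * NET_GBPS * 1000 / 8   # bits -> bytes, ignoring overhead

aggregate_mbps = min(disk_limit_mbps, net_limit_mbps)
print(f"disk-limited: {disk_limit_mbps} MB/s, "
      f"network-limited: {net_limit_mbps:.0f} MB/s, "
      f"expected ceiling: {aggregate_mbps:.0f} MB/s")
```

Under these assumed numbers the 10G network, not the SSDs, is the ceiling for an all-flash setup, which is a common finding in practice.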
 
Is the journal a partition of its own? E.g., can I create four primary partitions for OSDs, or not?

Instead of the Samsung SSD, any opinion on the SanDisk CloudSpeed Eco enterprise SSD range?
 
We have a 3-node Ceph hyper-converged cluster at my work, and the one thing that kept biting us in the ass from time to time, seemingly at random, was corosync totem retransmissions.

The best thing we did was to move corosync onto a separate network. To save on costs, and since we will never need to increase the number of hosts, we wired it host to host (no switch).
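For anyone wanting to do the same: a rough, illustrative sketch of a dedicated corosync ring in `/etc/pve/corosync.conf` (cluster name, addresses, and node names here are made up; on PVE you must increment `config_version` on every edit and follow the cluster-manager docs):

```
totem {
  version: 2
  cluster_name: pve-cluster      # example name
  config_version: 5              # must be incremented on every change
  interface {
    ringnumber: 0
    bindnetaddr: 10.10.10.0      # dedicated corosync network, no other traffic
  }
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1       # address on the direct host-to-host link
  }
  # ... pve2 and pve3 accordingly
}
```

The point is simply that `ring0_addr` / `bindnetaddr` sit on a network that carries nothing but cluster traffic, so storage or VM load can never delay the totem token.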
 
