Advice for a new hyper-converged platform

tbaror

Member
Jan 21, 2022
Hello All,

We have been using Xen, and later XCP-ng, for many years. Recently I stumbled on a few articles and recommendations for Proxmox, and looking at the product itself it seems very comprehensive in terms of features and manageability, especially the hyper-converged options and the fact that containers run natively on the host itself. I asked myself how come we never tested Proxmox, so we decided to try it first on a modest platform (limited budget), as described below. It will mostly be used for our IT department's monitoring and management environment; if it runs well we will later invest in more enterprise-grade disks. All our monitoring and management tools are dockerized, and we plan to move them to Kubernetes and Rancher in the future.
Later, if all goes well, we would like to build a similar system for our QA environment, based on Supermicro Hyper SuperServer hardware and hyper-converged with Proxmox.

Current platform spec:
Cluster quantity: 4x Supermicro 1U servers
CPU: 2x E5-2690 v4
Memory: 256 GB
Network: 4x 10Gb
Disks: 10x 2.5" bays; we plan to mount 6x 2TB Samsung 870 EVO + 2x 2TB Samsung NVMe (for cache)

As for hyper-converged mode, I understand there are two options.
The first is a Ceph cluster, which I don't have experience with and have only minimal knowledge of. I understand it uses the physically available disks directly and cannot be combined with ZFS volumes.

The second option is to create ZFS volumes with GlusterFS on top of them. I have minimal experience with it; I used to play with it a few years back. But in case we go with GlusterFS, which volume mode is best suited for 3 nodes?
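For illustration, the kind of layout I have in mind is a ZFS dataset per node used as a Gluster brick, with a 3-way replicated volume across the nodes (pool, volume, and host names below are just placeholders, not a tested config):

Code:
# on each node: a ZFS dataset to hold the Gluster brick ("tank" is a placeholder pool name)
zfs create -o mountpoint=/gluster/brick1 tank/brick1

# from one node: peer the others and create a replica-3 volume
gluster peer probe node2
gluster peer probe node3
gluster volume create vmstore replica 3 \
  node1:/gluster/brick1/data node2:/gluster/brick1/data node3:/gluster/brick1/data
gluster volume start vmstore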

I would like to get your advice and tips on which option is better for a hyper-converged build, for example the network layout and disk settings for each option.
Please advise.
Thanks
 
I currently have 2 Proxmox Ceph clusters.

One is 3x 1U 8-bay SAS servers using a full-mesh network (2x 1GbE bonded). 2 of the drive bays are ZFS-mirrored for Proxmox itself and the rest of the drive bays are OSDs (18 total). Works very well for 12-year-old hardware. This is a staging cluster.

The other one is 4x 2U 16-bay SAS servers using 10GbE networking with 2 switches for LAG, plus an external QDevice VM (witness device) for cluster quorum. As with the first cluster, 2 of the drive bays are ZFS-mirrored for Proxmox itself and the rest of the drive bays are OSDs (56 total). This is the production cluster.
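In case it's useful, the QDevice setup itself is just a package on each side plus one command (the IP below is a placeholder for the witness VM):

Code:
# on the external witness VM (not a cluster member)
apt install corosync-qnetd

# on every cluster node
apt install corosync-qdevice

# from any one cluster node, register the witness
pvecm qdevice setup 192.0.2.10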

I haven't had any issues with Proxmox with Ceph itself. The only issues I've had are hard disks failing, caught by SMART alerts. You'll want to follow the correct steps for replacing the OSD in Ceph; a rough outline is below.
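The replacement drill looks roughly like this (OSD ID and device name are placeholders; check against the Ceph and Proxmox docs before running anything):

Code:
# mark the failed OSD out and let Ceph rebalance (watch "ceph -s" until healthy)
ceph osd out osd.12
# stop the OSD service on the node that owns it
systemctl stop ceph-osd@12
# remove the OSD from the cluster and wipe the old disk
pveceph osd destroy 12 --cleanup
# after swapping the physical disk, create a new OSD on the replacement
pveceph osd create /dev/sdX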

You'll want to run the latest version of Ceph, which is Pacific. It has a lot of optimizations. Ceph is very fault tolerant; it was designed for "unreliable" situations.

I followed the instructions at https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster. You need to set up the Proxmox cluster first, before you install Ceph: https://pve.proxmox.com/wiki/Cluster_Manager
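Boiled down to the CLI, the order of operations from those two pages is roughly this (cluster name, IPs, and subnet are just examples):

Code:
# on the first node: create the Proxmox cluster
pvecm create my-cluster
# on each additional node: join it, pointing at the first node's IP
pvecm add 192.0.2.11

# install the Ceph packages on every node
pveceph install
# initialize Ceph once, from any one node, with the Ceph network
pveceph init --network 10.10.10.0/24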

There are a lot of YouTube videos on configuring Ceph on Proxmox.

As for Ceph networking, it's recommended to have the cluster (Corosync) and Ceph (public and private) traffic on separate networks. In my production cluster, I put both Ceph (public and private) and Corosync traffic on the same 10GbE network via 2 LAG switches. Again, this is not considered optimal, but it works for me.
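With 4x 10Gb ports you have room to split the traffic; if you go that way, the Ceph split can be declared at init time, and Corosync can be given its own link when the cluster is created (subnets below are just examples):

Code:
# Ceph: public network for client/monitor traffic, cluster network for OSD replication
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24

# Corosync: give it a dedicated link when creating the cluster
pvecm create my-cluster --link0 10.10.30.11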

I do run the Ceph monitors, metadata servers, and managers on all the servers. Some claim it's unnecessary, but I have the memory to support those processes anyway.
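Creating those daemons is basically a one-liner each, from the GUI or on each node via the CLI:

Code:
pveceph mon create
pveceph mgr create
pveceph mds create   # only needed if you plan to use CephFS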

A very good resource is reddit.com/r/Proxmox. You can search there for Ceph networking setup posts.

The Proxmox GUI makes setting up Ceph and managing it very easy. You can always use the command-line if you need to.

So, go ahead and spend the time to get acquainted with Ceph. You'll be pleasantly surprised how well it works.
 
Thank you
 
