New proxmox cluster separated or join to existing?

Lucian Lazar

Member
Apr 23, 2018
Romania
ecoit.ro
Hi all, we have a production cluster of 4 nodes running version 5.4 with Ceph over SATA disks, 4 monitors, and 1 Gbit/s network cards.
We have purchased 4 new servers, all with SSDs and 10GbE NICs.
Since at least half of the containers and VMs on the existing SATA cluster will have to be migrated to SSD, what would, in your opinion, be best:

1) Join the new nodes to the existing cluster and migrate the CTs and VMs to them;
2) Create a separate cluster, not joining the existing one, and use backup/restore procedures.

We would prefer option 1, as there would be a single point of management for all nodes in the cluster. However, we are not sure how to handle Ceph in this case. I know Ceph can manage different "classes" of OSDs; could we eventually create an SSD class and a new pool with OSDs only from these 4 new nodes? Also, would the new cluster as a whole be limited to 1 Gbit/s overall, since the 4 old nodes have only 1 Gbit cards? I have read somewhere that Ceph limits a pool's performance to its slowest disk; is this also the case for networking? Would having 2 Ceph classes, let's say a SATA class and an SSD class, somehow affect the performance of the SSD class?
Thank you all in advance
 


It depends on what you will have in the future. If you will keep only the new nodes and remove the old ones (or reuse them later for a different purpose), simply use option 1. In the case of Ceph you will have a transition phase with mixed OSD types, but after removing the SATA nodes you will end up with a pure-SSD Ceph cluster.

If you are just expanding your environment and will run SATA and SSD in parallel, option 1 is also the better way; however, in order to avoid pools spanning different OSD types, you have to define the separation between them in the CRUSH map, see also http://docs.ceph.com/docs/jewel/rad...ap/#placing-different-pools-on-different-osds
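As a rough sketch of that separation: the linked Jewel-era document describes editing the CRUSH map by hand, but since Ceph Luminous the built-in device classes make this simpler. The rule and pool names below are illustrative examples, not existing objects on your cluster, and the PG count is a placeholder you should size for your own setup:

```shell
# Create a replicated CRUSH rule that only selects OSDs of device class "ssd"
# ("replicated-ssd" is an example name)
ceph osd crush rule create-replicated replicated-ssd default host ssd

# Create a new pool backed by that rule, so its data lands only on the SSD OSDs
ceph osd pool create ssd-pool 128 128 replicated replicated-ssd

# An existing pool can also be switched to a class-restricted rule;
# Ceph will then migrate its data onto the matching OSDs
ceph osd pool set some-pool crush_rule replicated-ssd
```

With such rules in place, the SATA-backed and SSD-backed pools do not share OSDs, so the slow disks should not drag down the SSD pool's storage performance (client network bandwidth per node is of course still limited by that node's NIC).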

If you want to have two independent clusters (e.g. so that a loss of quorum affects only one of them), use option 2.
 
