Hello,
I am in the process of sizing a Proxmox HA cluster as a private-label cloud solution that we will resell. My design considerations are to allow for growth by adding nodes without downtime and to ensure no single points of failure in the new environment. I was looking at Dell VxRail, but with VMware under Broadcom being a question mark, I am looking at alternatives.
Currently I have 100 VMs totaling 540 vCPUs, 1100 GB of actively committed RAM, and 58 TB of used storage, all on Hyper-V. We intend to grow this; our target is to double it within a year.
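For context, the rough numbers I'm sizing against are just the current figures doubled (a minimal sketch of my own projection, nothing more precise than that):

```python
# Current footprint (from Hyper-V) and the one-year doubling target.
current = {"vms": 100, "vcpus": 540, "ram_gb": 1100, "storage_tb": 58}
growth_factor = 2  # target: double within a year

projected = {key: value * growth_factor for key, value in current.items()}
for key, value in projected.items():
    print(f"projected {key}: {value}")
# -> 200 VMs, 1080 vCPUs, 2200 GB RAM, 116 TB storage
```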
What I think I need is a Proxmox cluster using Ceph as shared storage.
According to the Ceph calculators, I am looking at 7 hosts, each with five 7.68 TB NVMe drives, which should give me 110 TB+ of safe storage with 2 replicas.
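For reference, this is how I'm arriving at that usable figure; the ~85% fill ceiling is my own assumption to leave headroom for rebalancing, not something the calculator dictates:

```python
hosts = 7
drives_per_host = 5
drive_tb = 7.68
replicas = 2
max_fill = 0.85  # assumption: keep the pool below ~85% full for rebalance/failure headroom

raw_tb = hosts * drives_per_host * drive_tb   # 268.8 TB raw
usable_tb = raw_tb / replicas * max_fill      # ~114 TB "safe"
print(f"raw: {raw_tb:.1f} TB, usable at {max_fill:.0%} fill: {usable_tb:.1f} TB")
```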
My plan is also to have redundant 100 GbE NICs in a LAG to redundant 100 GbE switches for Ceph storage, and redundant 10 GbE switches for VM traffic.
The idea is that once I have the networking and storage set up, I can essentially scale this solution as needed by adding nodes. The question is: is there a limit to the number of VMs a Ceph node can support? In VMware or Hyper-V we can calculate the vCPU and memory load and go from there, but I can't find information that would tell me how many VMs I can fit into a given solution while also accounting for the load Ceph places on the hosts.
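The closest I've come to an answer is to treat Ceph like any other workload: reserve cores and RAM per OSD on each node, then divide what's left among VMs. A rough sketch of that reasoning follows; the node specs, the ~2 cores and ~5 GB per NVMe OSD (Ceph's osd_memory_target defaults to about 4 GB), and the oversubscription ratio are all my assumptions, not recommendations I've found anywhere:

```python
# Sketch: per-node VM budget after reserving resources for Ceph OSDs.
node_cores = 64        # assumption: physical cores per node
node_ram_gb = 768      # assumption: RAM per node
osds_per_node = 5      # five NVMe OSDs per host

cores_per_osd = 2      # assumption: ~1-2 cores per NVMe OSD
ram_per_osd_gb = 5     # assumption: osd_memory_target ~4 GB plus overhead
host_overhead_gb = 16  # assumption: Proxmox / OS / monitor overhead

vcpu_ratio = 3         # assumption: acceptable vCPU:pCPU oversubscription for this workload

vm_vcpu_budget = (node_cores - osds_per_node * cores_per_osd) * vcpu_ratio
vm_ram_budget = node_ram_gb - osds_per_node * ram_per_osd_gb - host_overhead_gb

print(f"vCPU budget per node: {vm_vcpu_budget}")
print(f"RAM budget per node:  {vm_ram_budget} GB")
```

Across the cluster I'd then budget against N-1 nodes so an HA failover still has room to restart the failed node's VMs. Does that line of reasoning hold up, or am I missing something about how Ceph's load scales?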
Does it make sense to separate the VM and storage roles into separate systems?