3-host cluster with CEPH datastore and HA at OVH Cloud

François093

Hello,
I would like to set up a Proxmox cluster at OVH. 3 Hosts with CEPH for HA.
I was thinking of using the Advance 3 offer from OVH. I've mapped out a solution in my lab with 3 machines,
each with a 1TB SSD, and then configured a 1TB CEPH datastore using the SSD from each host.
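For reference, the lab setup was roughly the following (the subnet, device name and VM ID are just what my test machines use, adapt as needed):
Code:
pveceph install                              # Ceph packages, on all 3 nodes
pveceph init --network 10.10.10.0/24         # once, on the first node: network used by Ceph
pveceph mon create                           # on each node, to end up with 3 monitors
pveceph mgr create
pveceph osd create /dev/sdb                  # the 1TB SSD of the host
pveceph pool create vm-pool --add_storages   # replicated 3/2 pool, added as a PVE storage
ha-manager add vm:100 --state started        # put a test VM under HA
With 3 replicas over the three 1TB OSDs that comes out to roughly 1TB of usable space, which matches the datastore size above.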
Hot VM migration and HA works. I'd like to redo the same infrastructure at OVH.
I discovered that I'd have to add OVH's CDA option to be able to do HA.
I thought I could reproduce my CEPH datastore with the disks from each host (2× 960GB NVMe SSD in soft RAID).
What's the best solution?

Best regards
 
Hello,

I'm in a somewhat similar situation, except that I did the lab directly at OVH. What I did:

  • Created a VRACK
  • Servers with 1 public network port
  • The other port connected to the VRACK
  • Ceph configured on the VRACK (rough network config sketched below)
  • HA works (the servers are RISE-1 with 2 disks of 500 GB)

I'm hesitant to do like you and go for the ADVANCE-3; I'm just worried that in our case the 5 Gb/s bandwidth might be insufficient...
https://www.proxmox.com/en/download...cumentation/proxmox-ve-ceph-benchmark-2023-12
The Proxmox recommendations estimate that a minimum of 10 Gb/s is needed.
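For context, the network side of what I described above looks roughly like this in /etc/network/interfaces on each node (interface names and addresses are examples, not my exact values):
Code:
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24       # public IP of the node (example)
        gateway 203.0.113.254
        bridge-ports eno1             # the public network port
        bridge-stp off
        bridge-fd 0

auto eno2
iface eno2 inet static
        address 10.10.10.1/24         # private VRACK subnet, dedicated to Ceph
        mtu 9000                      # jumbo frames, if your VRACK/NICs allow it
Ceph was then initialised with pveceph init --network 10.10.10.0/24 so that all OSD and monitor traffic stays on the VRACK.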

If you have viable alternatives, I'm open to suggestions.
 
Hi Guys

We have a cluster composed of 3 scale-i1 servers, 2 at GRA and 1 at RBX. All NICs are linked to the vrack, so we get 100 Gb/s in GRA and about 70 Gb/s between GRA and RBX. Latency is about 1.2 ms and Ceph works well.

Maybe take a look at the offer with the 3 DCs in Paris to reduce the latency.
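If you want to check the numbers between two sites yourself before committing, a quick test over the vrack is enough (the IP and pool name are examples):
Code:
ping -c 100 10.10.10.2                   # rtt min/avg/max at the end; ~1.2 ms GRA<->RBX in our case
rados bench -p vm-pool 60 write -t 16    # rough Ceph write throughput once the pool exists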

Best regards
Pascal
 
Hi Pascal,
How do you get to the 100 Gb/s figure? The private network on the scale-i1 is 25 Gb/s, which is sufficient for Ceph (whereas 5 Gb/s is not, according to Proxmox).

How many disks are you using with Ceph? (3 or more?)

Best regards
 
With iperf in multithread mode. The bandwidth is 25 Gb/s per NIC and we set up bonding with the 4 NICs, so roughly 4 × 25 Gb/s ≈ 100 Gb/s aggregate.
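Roughly what the measurement looks like with iperf3 (IPs are examples); a single TCP stream tops out around one NIC, which is why the parallel streams matter:
Code:
iperf3 -s                          # on the first node
iperf3 -c 10.10.10.1 -P 8 -t 30    # on another node: 8 parallel streams over the VRACK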

We have 6 NVMe drives of 4 TB per server.
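And for the record, the bond is along these lines in /etc/network/interfaces (NIC names, address and hash policy are examples, adapt to your servers; the aggregation also has to be enabled on the OVH side):
Code:
auto bond0
iface bond0 inet static
        address 10.10.10.1/24
        bond-slaves eno1 eno2 eno3 eno4
        bond-mode 802.3ad                  # LACP over the 4× 25 Gb/s ports
        bond-miimon 100
        bond-xmit-hash-policy layer3+4     # spreads flows across the links
        mtu 9000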