RECOMMENDED SETUP FOR HA/CLUSTER CONFIGURATION

@Abhijit Roy Out of interest, do you have a rough diagram you could post of what you intend so far? It'll help with working out if something is missing. Also, on the earlier question about a fencing device: I work with smaller setups, so I haven't needed one personally, but if you are working in a power-unstable site, then I would recommend it. Users in South Africa, especially, have to do "exciting" things with their infrastructure due to the daily rolling blackouts, and being able to time an outage before a storm hits can be convenient; after all, the infrastructure to talk to the site could "go missing".

My understanding with Ceph is that, because the data needs to be near-line, the Ceph storage is replicated on each Proxmox node. With two units being storage, what you propose sounds more like a SAN-style arrangement, typically using NFS over the network to the nodes? Having said that, I recommend reading the Proxmox documentation more closely, as the Ceph requirements are pretty exacting.

Although it is very simplified, a lot can be learned from a low-power deployment which uses Ceph [I'm investigating a solar-powered cluster]:
https://www.youtube.com/watch?v=JfZuZ6zE7AI&ab_channel=RaidOwl
And there is a failure-tolerant setup that can be learned from:
https://www.youtube.com/watch?v=74hor7682CI&ab_channel=ElectronicsWizardry

I find that a lot can be learned from people demonstrating things in the field; YouTube technical videos bring a wealth of experience and ideas to the table. No need to rush, after all.

Note: As an aside on Nutanix, I do remember sitting through one of their demonstrations and quickly determining that the cost was exorbitant compared to simply doing everything myself on Proxmox. Having said that, the free cold coffee kit was rather nice.
 
My main intentions overall are:

1. To minimize downtime as much as possible.
2. Redundancy of live data on external storage, as our data size is quite high (in the TB range).
3. 0% latency and high IOPS

So I was thinking HA with Ceph would be the ideal solution for me; correct me if I am wrong.
 
1. To minimize downtime as much as possible.
There are 3 layers you need to invest in: Network, Storage and Compute. A combination of PVE+Ceph will give you Storage and Compute. A general recommendation is that for Ceph you need a dedicated redundant Storage network in addition to your redundant Client network.
If you go with Ceph, the network should be 25G or higher.
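To make the two networks concrete, here is a minimal sketch of how the split is usually expressed in ceph.conf (the subnets are placeholder examples; on PVE the file lives at /etc/pve/ceph.conf):

    [global]
        # client-facing traffic: VM I/O, monitor access
        public_network = 10.10.10.0/24
        # replication traffic: OSD heartbeats, recovery, rebalancing
        cluster_network = 10.10.20.0/24

Putting cluster_network on its own dedicated, redundant links keeps replication and recovery traffic from competing with your VMs.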
Redundancy of live data on external storage
Ceph is not external storage in a standard PVE installation. It is hyperconverged, which means resources are shared and you need to adjust your Compute investment accordingly.
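As a rough, hedged example of what "adjust your Compute investment" means in practice: each OSD daemon defaults to an osd_memory_target of about 4 GiB, so with, say, 4 OSDs per node you should budget roughly:

    4 OSDs/node x 4 GiB ≈ 16 GiB of RAM per node for Ceph alone,
    plus CPU for the OSD and monitor daemons, before any VM allocations.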
our data size is quite high (in the TB range).
That's really relative. For some, 30 TB is high; others operate in 100s of TB or even PBs.
0% latency and high IOPS
Latency is not measured in percentages, but rather in milliseconds or microseconds.
We published some articles that may be helpful to you, e.g. https://kb.blockbridge.com/technote/proxmox-tuning-low-latency-storage/#optimized-bare-metal-latency
As you can see from the "Optimized Bare-Metal Latency" and "Non-optimized Guest Latency" figures, a QD1/4K IO has a virtualization insertion latency of 54.1 μs - 22.3 μs = 31.8 μs.

Whether you can achieve similar results depends almost entirely on your budget and choice of software and hardware components.
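If you want to reproduce this kind of measurement on your own stack, here is a minimal sketch with fio (the device path and runtime are examples; use a scratch device or test file, and note that a randread test does not write to the disk):

    fio --name=qd1-4k-randread --filename=/dev/nvme0n1 \
        --rw=randread --bs=4k --iodepth=1 --numjobs=1 \
        --direct=1 --ioengine=libaio --time_based --runtime=30

Compare the average completion latency ("clat") from a bare-metal run against the same run inside a guest to get your own insertion-latency delta.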


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Ceph is not external storage in a standard PVE installation. It is hyperconverged, which means resources are shared and you need to adjust your Compute investment accordingly.
Can you please explain this part in detail? I am not getting it properly. The way I am currently thinking of implementing this setup is as follows:

It will be a 3-node/server HA cluster. The servers' local storage will contain NVMe SSDs with Proxmox and the VMs installed (only the OS part). The external storage (Ceph) will contain SSDs and carry the data part of all VMs, maybe as an NFS mount of /home/, or used as a shared file system. There will be 2 separate storage hardware units, basically with identical configuration, which will be redundant to each other and carry the same data.

Kindly correct me if I am missing something.
 
The external storage (Ceph) will contain SSDs and carry the data part of all VMs, maybe as an NFS mount of /home/, or used as a shared file system. There will be 2 separate storage hardware units, basically with identical configuration, which will be redundant to each other and carry the same data.
Yes, you did mention it previously a while back, but the thread has been going on for a bit now so details get lost.
You were already advised in comment #15 that 2 nodes is not a valid Ceph configuration. There are many resources and discussions available online, e.g. https://www.reddit.com/r/ceph/comments/gmwczg/is_it_possible_to_have_a_2node_ceph_cluster_with/
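The underlying reason is monitor quorum: Ceph monitors need a strict majority to keep the cluster writable, and two nodes cannot provide one through a failure. A quick worked sketch:

    2 monitors: majority = floor(2/2) + 1 = 2  ->  any single failure loses quorum, cluster stops
    3 monitors: majority = floor(3/2) + 1 = 2  ->  one node can fail and I/O continues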

If this is for a home lab, sure, go for it. If you plan to run a business on it, I would not advise using storage as complex as Ceph in an unsupported configuration, or even in an approved configuration but without vendor support.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Just want to clarify: it is a 3-node server HA cluster only. I am talking about redundancy on the external shared storage (Ceph), which handles the data volume of the VMs, and it is a business/office setup.
 
Just want to clarify: it is a 3-node server HA cluster only. I am talking about redundancy on the external shared storage (Ceph), which handles the data volume of the VMs, and it is a business/office setup.
I think we are on the same page:
1) You want the Proxmox hypervisor in a 3-node cluster configuration - great
2) You want external shared storage, you want it to be Ceph, and you plan to use two nodes in the Ceph cluster - not great


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Please suggest your recommendation with respect to storage, as I want to keep user data on separate hardware and ensure redundancy of that storage hardware as well.
 
Please suggest your recommendation with respect to storage, as I want to keep user data on separate hardware and ensure redundancy of that storage hardware as well.
I do not see any benefit in running a 3-node PVE cluster AND an additional 3-node Ceph cluster; I'd go with one PVE+Ceph cluster. Or go a totally different way with "real" shared storage, like any product available with FC/iSCSI SANs and built-in HA (one or two units) with synchronous replication ... whatever you want and is supported.
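For the hyperconverged route, the per-node setup is short. A hedged sketch using the pveceph tooling (the network and device names are examples; check the PVE docs for the exact syntax of your version):

    # repeat on each of the 3 PVE nodes unless noted
    pveceph install                        # install the Ceph packages
    pveceph init --network 10.10.20.0/24   # once, on the first node only
    pveceph mon create                     # 3 monitors total -> quorum survives one failure
    pveceph osd create /dev/nvme1n1        # one OSD per data SSD
    pveceph pool create vmdata             # once; replicated pool (size=3) for VM disks

This gives you shared, redundant VM storage without a separate storage appliance, at the cost of CPU and RAM on the compute nodes as discussed above.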
 
