Is it possible to create a high-performance cluster using only two PCs with Ceph? What are the correct configuration steps?

rolandba

New Member
May 2, 2023
I have two PCs, each with 16 GB of RAM and 500 GB of SSD storage. I have created a Proxmox cluster with them, but when running PowerBI in a VM on the first node, the load is not shared with the other node. How can I create a high-performance cluster?

I am looking to optimize the performance of my Proxmox cluster, but I'm not sure how to do so. I want to ensure that my VMs can handle heavy workloads without any performance issues. My goal is to create a high-performance cluster that can handle the demands of PowerBI and other resource-intensive applications.

Any advice on how to properly configure my Proxmox cluster for optimal performance would be greatly appreciated. Thank you!
 
A generic VM cannot spread the CPU load of an application across multiple independent compute nodes. You need to implement an "application cluster", i.e., the application has to be aware that it can assign tasks to various VMs running on different physical nodes.
Creating an application cluster is application-specific. Some applications ship with all the pieces; for others you have to "cobble" them together. How to do either is beyond the scope of this forum.
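Just to make "the application has to be aware" concrete, here is a minimal Python sketch of the controller side of such an application cluster. Everything in it is a placeholder: the worker VM addresses, the port, and the process_chunk() RPC stand in for whatever your application actually provides.

```python
# Minimal sketch of an "application cluster" controller. Everything here is a
# placeholder: the worker VM addresses, the port, and the process_chunk() RPC
# stand in for whatever your application actually provides.
from concurrent.futures import ThreadPoolExecutor
import xmlrpc.client

WORKER_VMS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]  # one VM per physical node (hypothetical)

def send_to_worker(worker_ip, chunk):
    # The CPU work happens inside the worker VM, i.e. on that VM's physical node.
    proxy = xmlrpc.client.ServerProxy(f"http://{worker_ip}:8000/")
    return proxy.process_chunk(chunk)

def run_job(dataset):
    # Split the job into one chunk per worker and dispatch the chunks in parallel.
    chunks = [dataset[i::len(WORKER_VMS)] for i in range(len(WORKER_VMS))]
    with ThreadPoolExecutor(max_workers=len(WORKER_VMS)) as pool:
        partial_results = pool.map(send_to_worker, WORKER_VMS, chunks)
    return [item for partial in partial_results for item in partial]

if __name__ == "__main__":
    print(run_job(list(range(100))))
```

The point is that Proxmox only supplies the VMs; splitting the job and shipping the pieces to workers on other nodes is logic the application itself has to contain.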
For a single VM to handle a "heavy" (in quotes, as it's relative) workload, the underlying compute node needs to provide sufficient resources. Sometimes these resources need to be dedicated rather than shared with other VMs.

A basic Proxmox cluster (which requires either three nodes, or two nodes plus a quorum device) provides High Availability for a VM, i.e. if one node dies, another will pick up its load. It seems you are at the very start of your research; if so, a common mistake is to load all nodes to 70-90%. As you can imagine, combining two 90% loads on the single surviving node will not make anyone happy.
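The arithmetic behind that warning is worth spelling out. A rough sizing sketch (the utilization figures are assumptions, not measurements):

```python
# Back-of-the-envelope failover sizing; the utilization figures are assumptions.
nodes = 2
load_per_node = 0.90               # steady-state utilization per node (assumed)

surviving = nodes - 1
load_after_failure = load_per_node * nodes / surviving
print(f"Load landing on each survivor after one failure: {load_after_failure:.0%}")
# -> 180%: the failover cannot actually be absorbed.

safe_steady_state = surviving / nodes
print(f"Safe steady-state load per node: {safe_steady_state:.0%}")
# -> 50% with two nodes (plus a qdevice); with three nodes it is ~67%.
```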

Building a highly available cluster that interacts appropriately with your application is a complex task. Billion-dollar companies (IBM, for example) exist to consult others on how to do it. That said, there are many resources online where you can start learning, and the best way to learn is to experiment.

Good luck

P.S. Although you only mentioned Ceph in the thread topic: two nodes is not a supported configuration for Ceph. It will work in a home lab until it won't.
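The reason is quorum: Ceph monitors need a strict majority to keep the cluster writable, and the default pool replication (size=3) expects three separate hosts. A quick illustration of the majority math:

```python
# Why two Ceph nodes is not supported: the monitors must keep a strict majority
# (quorum) to stay writable, and the default pool replication (size=3) expects
# three separate hosts.
def monitors_needed_for_quorum(total_monitors: int) -> int:
    return total_monitors // 2 + 1

for mons in (2, 3, 5):
    needed = monitors_needed_for_quorum(mons)
    print(f"{mons} monitors: quorum needs {needed}, tolerates {mons - needed} failure(s)")
# 2 monitors: quorum needs 2, tolerates 0 failure(s)  <- any single failure halts the cluster
# 3 monitors: quorum needs 2, tolerates 1 failure(s)
# 5 monitors: quorum needs 3, tolerates 2 failure(s)
```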


Assuming that there are 3 nodes in the cluster, configuring the cluster with High Availability (HA) and a Ceph storage cluster should theoretically provide high performance, right? When running an application, the resources of all three nodes would be utilized to process the application, effectively sharing the workload among them. Am I right?
 
There is no magic that will fuse the three servers together into a single big machine. Each server can still only access its own CPU and RAM. What you get with Ceph is clustered storage, but that is not necessarily faster: I/O goes over the comparatively slow network with additional latency instead of over a fast local PCIe connection. Whether a workload can be distributed across multiple servers over the network depends entirely on whether the application was designed to do so. Ceph and PVE clusters are about redundancy and reliability, not additional performance.
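To put rough numbers on the latency point (the figures below are assumptions for illustration, not benchmarks, and the Ceph write path is heavily simplified):

```python
# Rough illustration of the latency point; the figures are assumptions, not
# benchmarks, and the Ceph write path is heavily simplified.
local_write_ms = 0.1     # assumed local NVMe 4k sync write
network_rtt_ms = 0.2     # assumed round trip on a 1 GbE cluster network
replica_write_ms = 0.1   # assumed write latency on a replica's disk

# Simplified replicated write: client -> primary OSD -> replicas -> ack to client.
ceph_write_ms = network_rtt_ms + replica_write_ms + network_rtt_ms

print(f"local write: ~{local_write_ms:.1f} ms (~{1000 / local_write_ms:.0f} sync IOPS at queue depth 1)")
print(f"ceph  write: ~{ceph_write_ms:.1f} ms (~{1000 / ceph_write_ms:.0f} sync IOPS at queue depth 1)")
```

Even with a faster network the pattern holds: every replicated write pays for network round trips that a local disk does not.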
 
Assuming that there are 3 nodes in the cluster, configuring the cluster with High Availability (HA) and a Ceph storage cluster...
Yes, it is possible to configure a highly available compute and storage (in this case, converged) cluster with 3 nodes.
...should theoretically provide high performance, right?
That depends on what you consider high performance; for the sake of argument, let's say yes.
When running an application, the resources of all three nodes would be utilized to process the application, effectively sharing the workload among them. Am I right?
No, not unless your application knows how to do so, i.e. there is a controller that spins up a worker on each physical node and doles out a task to each. A single VM is always constrained to the compute, network, and storage resources available on the physical node where that VM is running.

You seem to be conflating High Availability, High Performance, and perhaps Super Computing. These are completely separate concepts in the environment we are discussing on this forum. I am guessing you are thinking of supercomputer environments where nodes may run with shared memory and the OS presents many nodes as a single virtual computer. That's not what Proxmox, ESXi, or Hyper-V do.

effectively sharing the workload among them. Am I right?
If your application knows how to do so, yes. But if it does, you don't need a cluster or high availability: such applications usually prefer share-nothing architectures and can self-heal.


